The Great Flattening (stratechery.com)
139 points by sturza on May 13, 2024 | hide | past | favorite | 98 comments


" will AI be a bicycle that we control, or an unstoppable train to destinations unknown? To put it in the same terms as the ad, will human will and initiative be flattened, or expanded?"

Interesting line of thinking here. I've never considered this but an observation from my usage of chat AIs at least is an increased willingness to defer my thought-organising process to the AI by just jotting down some random ideas and asking it to make it coherent. I guess on a personal level I'm flattening my own mind?


I find that terrifying -- when I do this, I feel I am willingly giving up hard-earned abilities, atrophying my ability to reason.


I'm in two minds about it - in the strictest literal sense, it's true - but it also rhymes with Plato complaining about writing degrading the ability of scholars to memorize things.


Actually, Plato is not wrong. I'm a big pen and paper user, and write everything down. While it allows me to go back six months, in some cases I can't recall two weeks back without my notebook.

The brain has a strange ability to understand that information is stored elsewhere. Write it, and you forget it. Take photos in a concert, and memories become fainter. Talk about something important, and you start to forget it part by part.

AI is something way bigger. It's a dumb system which mimics us without the essence of human or personal style, yet does mundane tasks without questioning. So you trade your own fine-tuned, honed, polished ability to some GPUs running software tuned for the masses. Like trading your beaten tungsten tool for a shiny iron one: looks nice, but way inferior. The ability to do these mundane tasks is the foundation for not doing the same tasks in a mundane way, or for doing more complex tasks built on top of them. You rig your own foundations with explosives. Not wise.

I'd not do that. I don't use any AI systems in any of my tools.


> Take photos in a concert, and memories become fainter.

I don't know about short- or medium-term, but long-term the photos can remain objective anchors for memories that reduce the memory drift for everything all around them. When you have a photo of an event, even if you didn't take it yourself, it locks in many facts that individually weren't important enough to memorize but do constrain other possible facts and thus keep memory more accurate.

When I got into photography I took high-effort high-quality photos everywhere I went. There are so many minor events I wouldn't remember at all if not for the photos, and even for events I do remember, I likely would have forgotten that certain people even attended.

Long term, I can be completely objective about the times, places, people, conditions, etc. of a great many events in my life, and every time I refresh and reinforce memories it's anchored in those objective details.

There's a less objective angle to this that I still acknowledge and enjoy. When you take a good photo of a good moment, and look back on it as representative of the event, it has a way of making the entire event look that positive. When you scroll back through a timeline of photos like this, it can make entire years of your life look as good as you want them to.

That's part of why I think there's a big difference between scrolling a backup of all your phone photos vs hand-curating albums (regardless of the device that took them). When you choose what you'll see in future you influence how you're going to feel about it, and that's an under-appreciated mechanism for investing in your future headspace.


Note: I'll take short snips from your quotes to keep this comment tidy.

> don't know about short- or medium-term, but long-term the photos can...

You're absolutely right; however, I mentioned concerts for a specific reason.

> When I got into photography I took high-effort high-quality photos everywhere I went...

This is also what I do, when walking around in a city, taking photos on vacation, etc. You can do this because, when taking photos in a relatively serene environment (compared to a concert, especially an open-air festival one), you can internalize the whole event before taking that photo, so the photo becomes an anchor for the event. In a concert, everything moves so fast. Trying to concentrate on taking a good photo makes you miss a large chunk, which prevents you from forming that emotion and memory.

> Long term, I can be completely objective about the times, places, people, conditions, etc. ...

True, but as I said, you had time to internalize the event before taking the shot. This is what creates the anchor and its reinforcing effect. You can't reinforce a memory you didn't form.

> There's a less objective angle to this that I still acknowledge and enjoy. ...

Photos bring joy, but not always. There are many positive photos which make me feel bitter, or even sad.

> That's part of why I think there's a big difference between scrolling a backup of all your phone photos vs hand-curating albums ...

I can relate to that, but I also take a different approach. If I have the time and feel like it, I challenge myself with "1 scene, 1 shot", regardless of the camera I have with me. This allows you to ingest the scene you're in, take a purposeful look, and decide which photo to take, which forces you to form a relationship with your surroundings. When you take that photo and revisit it in the future, that frame will make you remember the whole area where you searched for that one shot, and will bring tons of memories and visions back.

If I don't decide to or can't do that, I curate albums as you do, yes.

If you're interested, my photos are available at

https://flickr.com/zerocoder

https://instagram.com/hbayindir

P.S.: That flickr page needs some cleaning up and tidying.


> Write it, and you forget it.

For me, writing by hand means remembering.

I still lug around an actual notebook to all meetings and take notes by hand. I approximately never look at those notes later, because I remember everything important that I wrote.

I've tried every variant of taking notes electronically and the result is I don't remember any of it. Counterbalancing, it sure is easier to grep a text file than search on paper pages in my notebook.

But, having the info in my head is still orders of magnitude faster than grep, so taking notes on paper wins for me.


> For me, writing by hand means remembering.

When you're writing about a subject or taking meeting notes, sure. When you're writing a 30-item to-do list, your brain will say "Nope, everything is in that notebook; refer to that".

As I noted in my original comment and elsewhere, I'm a heavy pen and paper user. I finish approximately two notebooks a year, plus I do my thinking/research in dedicated notebooks.

Personal knowledge base and other digital notes are just distillations of these notebooks in a synced/searchable form, and I'm slowly making these public, as I have time.


Can you critique my usage of ChatGPT?

How exactly is this rigging my "own foundations with explosives"?

https://chat.openai.com/share/927ac3b7-2b3c-46ea-a74b-d0dec5...


Putting aside that there are no correctness guarantees for AI models, you miss the opportunity to read the docs and learn (or at least become aware of) the tool's full capabilities, which may help you in the future.

You can bookmark the relevant doc page and return to it whenever you need it. Instead, you now have a buddy you can bug for answers in general, so you slowly wither your ability to do research, read more complex docs, and learn from collateral information (i.e., you learn something, become aware of other features, and so slowly learn and internalize the whole subject).

Instead you ask, and get a nibble of information which is harder to connect to the bigger corpus you might have. You probably saved time this once, but if you read the docs, you'll progressively spend less time on the subject, saving you tons of time down the road; plus you'll sharpen your ability to read docs and get faster at searching, reading, and understanding them.


These arguments seem like they'd apply equally well to googling for the documentation to a specific function and then using the answer from the preview snippet (or clicking the link, reading exactly what you need, and then closing the tab without reading any further). I'm not saying that means they're wrong, but it's hard not to feel like it would be hyperbole to call that "rigging your foundations with explosives".


It's a matter of habit. I personally don't search for answers on the internet to begin with. I use Zeal/Dash to store docs of the tools I use locally, and directly read the reference docs for what I need. If I need information from a very specific page, I always bookmark or take note of that page, and return to it automatically. I also always read beyond what I need, to see whether I'm missing something or holding something wrong. If it's something I use very frequently, I further document what I do, and how it works, in my personal knowledge base.

On the surface, asking ChatGPT is no different from using StackOverflow for everything, but I'd argue that StackOverflow bears the same dangers, unless the answer given is comprehensive and written with collateral information in mind.

However, having a personal assistant on tap which can answer (or hallucinate) anything or everything will definitely make you lazy.


Here are the tools I will be using today:

F#, Jupyter, Python, R, MySQL, PostgreSQL, ZSH (awk, sed, fill in the rest), VS Code, Excel, Word, git, and probably more.

This weekend I continued to work my way through Learn You a Haskell for Great Good, taking time to go over the functional programming concepts in both Haskell and F#. In addition I read most of Data Science at the Command Line, where I was introduced to ggplot2, so I then worked my way through the online ggplot2 book.

I'm not very interested in memorizing the complete syntax, standard library, and common third-party libraries for the myriad of tools that I've listed above. I don't really see the point of learning how to read through the MySQL documentation.

Here's what I want to have in my brain: Most of Townes Van Zandt, Willie Nelson, and the Grateful Dead's catalog of songs so I can play them on acoustic guitar without a songsheet as I like to make eye contact with the audience.

So yeah, thank you for telling me that my approach is slowly withering my abilities.

I'll put this as nice as I can: You are very judgmental.


Since you seem a bit touchy about the feedback you specifically requested, I'll take a tangent:

How are you finding ChatGPT for functional stuff? I found it to be unusably bad, unable to transform trivial programs. Have you found it helpful for Haskell or F#?


Oh, I have found it very helpful.

FWIW, it takes practice to get good at using ChatGPT. You have to direct it in certain ways and break up problems into bite sized chunks to do complex things like, eg, assist in writing a language server for a custom DSL using FParsec. If you don’t have a higher level understanding of the task and don’t break things into smaller tasks it won’t work very well!


> I'll put this as nice as I can: You are very judgmental.

Thanks for your direct and honest view. No hard feelings here.

Let me tell you. I know C, C++, Go, Java, bash, Eclipse IDE, git, Docker, Saltstack, Terraform, OpenStack, Kubernetes, some MATLAB and probably some other tools I forgot that I know, and I manage a big fleet of servers while I'm writing this.

I don't "remember" the syntax of anything. I somehow internalized them. I don't think about them. If I make a mistake, my text editor (which is NOT VSCode) politely tells me about it.

I also used to remember the double bass parts of Bizet, Beethoven, and symphonies of local composers, plus tons of songs, because while I had the sheets in front of me, I had to listen to the tubas & percussion to make sure I was in sync with them, and watch the conductor to double-check the metronome in my head and get the tone cues if he wasn't happy with our tone. To make sure our 100-person orchestra was playing at its peak performance, I had to know every kink and chicane of the traffic for our specific arrangement.

These days I remember tango songs' traffic, because I have to plan my figures 2-3 sentences ahead while dancing in a crowded hall.

Oh, and I sometimes play a couple of songs on my bass guitar if I have time left over from other activities.

So, yeah, thank you for telling me that I'm judgemental.

I'll put this as nice as I can: The choice is yours, but you're underestimating your abilities. Plus, people who like to read docs and write code the hardcore way are not dorks.

Have a nice day.


You're not giving up your ability to reason; you're just reasoning at a higher level. Think of AI like an employee: CEOs don't lose their reasoning ability because they have employees to turn their directives into reality.


Are you sure about that? :-)


based on what I've seen of CEOs, i wouldn't be so sure of that


Seems like a choice to me, one that you can choose not to make. The first choice is choosing to see AI as a doomsday device


Depends what the alternative is. If, on average, people would lose that fleeting thought and never dig into it or research it more, it can be a huge net positive to write such thoughts down, even disorganized, keep a history of what you thought about, and actually debate the thought with the AI to get an outside perspective.

On the other hand if on average people would in fact dig into them, you might lose the ability to organize your thoughts yourself over time, but I always think that sort of fear is overblown.


I have found that writing everything down eroded my ability to remember a couple of important tasks for the day, so I started to exercise that part of the brain to get the ability back.

Any capability of the brain that isn't exercised will certainly wither. While the brain consumes the most energy of any organ, it also tries to minimize that consumption. Using AI extensively to accomplish mundane tasks will rob you of the ability in the long run.

This is the law of the body. Use it or lose it.


> an increased willingness to defer my thought-organising process to the AI by just jotting down some random ideas and asking it to make it coherent

...and then accepting that outcome as sufficient, and moving on, right? Is the alternative that, without the AI, you would spend more time researching and thinking about your ideas, pursuing (or stumbling upon) related ideas, and maybe ending up with a broader understanding at the end of the endeavor?

If so, working with a chatbot is good to the extent that you value economy. That is a worthy outcome in many circumstances. But the "inflated" alternative sounds valuable in its own way too.

From a broad social/behavioral perspective, I wonder the extent to which the "inflated" way will die off (or perhaps adapt and become something new?) given these new tools.


> > jotting down some random ideas and asking it to make it coherent

Isn't this more or less the same thing as spending a bunch of time going through a bunch of webpages from a Google search, except it takes way less time, arguably leaving _more_ time for the critical thinking and subsequent research you espouse?

> ...and then accepting that outcome as sufficient, and moving on, right?

Much like you can either immediately trust what you read on the Internet, or apply some critical thinking and dig into it more, you can do the same with whatever an AI returns to you.

AI can be an accelerator for mundane tasks and help you get to the high-value work quicker, or it can be a lazy shortcut that lets you do less.

The same can be said for almost any tool.


You can't separate will from attention, and attention spans and human initiative were already shattered well before AI, which is now the best hope for more people to leverage time and control to expand their minds. I'm sure the centaur mind will be the norm soon, though if AI really is as good as it promises to be, it won't be needed for long.


This goes beyond flattening. Given that only a handful of companies have the financial means to do AI at scale, and that those financial means are provided mostly by ads, I think that allowing these systems to essentially "think" for people like this will be unbelievably destructive for society long-term.


The URL contains a JWT, which is a CWE-598 security weakness in the application. Reference: https://owasp.org/www-community/vulnerabilities/Information_....


Haha, I know. As soon as I saw it, I decoded it and saw

    {
      "aud": "stratechery.passport.online",
      "azp": "HKLcS4DwShwP2YDKbfPWM1",
      "ent": {
        "uri": [
          "https://stratechery.com/2024/the-great-flattening/"
        ]
      },
      "exp": 1718188732,
      "iat": 1715596732,
      "iss": "https://api.passport.online/oauth",
      "scope": "feed:read article:read asset:read category:read entitlements",
      "sub": "WsrLyrr6qemVAgEGCjMm34",
      "use": "access"
    }
Not sure who user WsrLyrr6qemVAgEGCjMm34 is, but thanks for sharing the article with us all!
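For anyone curious how the decoding step works: a JWT's payload is just base64url-encoded JSON sitting between the two dots, so you can inspect it without any key (this reads the claims only; it does not verify the signature). A minimal Python sketch, using a toy token rather than the article's real one:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    A JWT is three base64url segments joined by dots:
    header.payload.signature. This inspects only the payload
    and performs NO signature verification.
    """
    payload_b64 = token.split(".")[1]
    # base64url decoding requires the input length to be a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Toy token: header {"alg":"none"}, payload {"use":"access"}, empty signature
token = "eyJhbGciOiJub25lIn0.eyJ1c2UiOiJhY2Nlc3MifQ."
print(decode_jwt_payload(token))  # → {'use': 'access'}
```

The same few lines applied to the token in the submitted URL would yield the claims shown above, `exp` minus `iat` being 2592000 seconds, i.e. the 30-day window other commenters noted.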

At first glance, looks like passport.online is a subscription management service: https://passport.online/


Pretty sure that Passport is Ben Thompson's (of Stratechery) own subscription management service. Not sure it is out in the world as a product yet.


Given that the token says it only allows reading of content and assets of this particular article for 1 month, it seems like this is an intentional feature for allowing subscribers to share paywalled URLs


Indeed, the same mechanism is used by Bloomberg for gift links: a signed JWT with an expiry 7 days from creation. A fitting use case, in my opinion.


I guess it's from the newsletter's "share" link or something. The article itself isn't paywalled: https://stratechery.com/2024/the-great-flattening/


Can we update the URL then? OP probably did that to get around that it was a dupe, already submitted. :/


It's time locked so no one can later assess his hot takes with the clarity of time.



"The Internet, birthed as it was in the idealism of California tech"

The moment CERN is transmogrified into California. One could argue the internet pre-WWW existed (I joined the internet myself on IRC and NEWS), but then, who today was on the internet pre-WWW?


> I joined the internet myself on IRC and NEWS

And email, right? You do have email, don't you? Which is not implemented over the WWW, and is pre-WWW by the best part of a decade. (RFC 821/822 are dated August 1982, and the first publicly documented version of HTTP 0.9 is, I think, 1991?)

I think it's pretty safe to say "The Internet", without any other qualification, came into being with the publication of the IPv4 RFC 791 in September 1981, by Vint Cerf and Robert Kahn.

Now, California might not be entirely accurate, as while a fair amount of the work on the Internet protocols was done at UCLA and UC Berkeley, MIT was also heavily involved, along with coordination from DARPA's IPTO in Virginia (specifically, The Pentagon). But I think "California" is a much more accurate birthplace for The Internet than CERN.

Edit: I'm guessing you also use the not-built-on-WWW DNS system from RFCs 882/883 published in 1983?


I can't speak about 1981 or 1983, I came to the internet ~1990. I just learned coding in 1981 and wrote some games in 1983 I guess.

Fun fact: I didn't have email, or at least didn't use it much outside the university or across organisations. I mostly used Usenet/NEWS to interact with people outside our university.

This was Germany ~1990, and there were no domains yet in common use. Emails used a complicated addressing scheme, and it was unclear how to reach people or find email addresses (there was no WWW to check a website for an email). You might find them in the signatures of Usenet posts, if at all.

"DNS system"

Same here, e.g. for FTP we used a text file with IP addresses of FTP servers (for downloading Linux and Aminet stuff)

Not sure this was specific to Germany, but the domain name system really came around with the WWW. I even wrote the code that took owning a domain in Germany (I worked at an ISP) from $1000/year (if I remember correctly; I wish I had bought Schmidt.de when it was available, even with that price tag) to $1/month with email for the masses. The company was called Strato and wanted to launch cheap web+domain+email; the idea was to auto-create the web+email config from an Oracle database instead of manually configuring servers.


Don’t forget BBN. The TCP/IP code in BSD was originally from them. Let’s call it a prototype.


BBN did a lot of the initial work, as well as building the initial infrastructure, including routers.

BSD got a contract to port TCP/IP to Unix because the DoD found itself facing the discontinuation of the main system it ran TCP/IP on.

Thus a grant to port the stack to Unix and make it available as widely as possible.


I was on mobile and didn't want to type this out:

I was there when Bill Joy, Sam Leffler, and Bob Henry (not Robert R. Henry) were crowded around a huge HP terminal in the common area of the 5th floor of Evans Hall. I used the public/common terminals for my work and was sitting there one day when the trio walked in and were excitedly talking about the TCP/IP code from BBN. I have no idea what they were doing, but I think they were trying to build it.


Generally, a lot of networking code evolved on the big 36-bit machines. There were implementations for Unix that predated BSD sockets, but AFAIK Berkeley got a grant to make an implementation that would be freely accessible when TOPS-20's future was indicated to be in jeopardy, or it might have been simply because, before AT&T was able to demand licensing fees, everyone was putting out clones of Unix on various hardware.


I'm pretty sure Berkeley (or UC?) had a System V license and was allowed to redistribute code based on it in BSD. That became a point of contention later, when the *BSDs started to become popular. This is all very old stuff so I might have the details wrong.

It was definitely a fun time to be at UCB. I remember getting the early Sun workstations and porting stuff to them. It was a really good port of BSD, but that Motorola chip was a problem for some things. I ported the Franz Lisp compiler (liszt) to it.


Gopher and Usenet forever!


> Gopher

From the University of Minnesota in California.


<raises hand>

If you haven’t used telnet on a Gandalf PAX via RAL SPAN gateway to get to a talker then you haven’t lived.


I spent the summer of 1972 programming via the ArpaNET. I had no school computing budget to use, so I rented a $12/mo 300 baud dialup modem and dialed into the UCLA TIP, which allowed me to spend the day programming PDP-10 assembly on Harvard's machine. Harvard had no (free) storage, so in the evenings I would send my files to Stanford's PDP-10, conveniently using the disk space of a professor on sabbatical.


Agreed, found this to be a cute distillation. Give Utah some credit too…


Hmm I wasn't on the Internet but was active on BBSes since '87, and shortly after on the worldwide X25 network (tymnet, telenet, datapac etc), does it count?


Some of us were.


> specifically, will AI be a bicycle that we control, or an unstoppable train to destinations unknown?

I am more worried about AI becoming a tank - an instrument only the approved agents of the state are allowed to possess and which is used to flatten dissent.

Look at the push for regulations in the name of “safety”. It is more about control than safety.


I'm not sure I see the point. It's not hard to see that humans almost everywhere and in almost every point in history organized (voluntarily or otherwise) into those with more agency and leverage and those with less.

Most people are content to find a comfort zone and continue in that for decades at a time as long as conditions are bearable. Some people find themselves both able and willing to make tough decisions that affect many other people. This could be through compassionate leadership, dispassionate but mutual agreement, or even violent oppression, and I think the people involved care more about that context than whether certain human-computer interfaces are involved.


The tendency to compare Meta and Apple offerings is always at least a little unfair. Given that this type of product and ecosystem is going to exist now, Apple did everything they reasonably could to avoid the problem of people being isolated behind their glasses, even to the point of introducing more weaknesses in a product because they felt it was worth pointing this trend in a certain direction.

The simulated face on the outside is a perfect example. By all accounts, it doesn't work well. They knew it didn't work well, and surely somebody had to say that the product was worse off including it. But clearly the decision was that they'd rather make a product that isolated people less than making a product that was more polished.

There were other VR & AR headsets for many years before Vision Pro; how come there weren't viral videos of people wearing them going about their day in city streets? Of the two, which is more like WALL-E with people reclined in total immersion and which is at least trying to head in the direction of AR being a seamless part of every day life?

Of course it also has an immersive mode, because you can't seriously make a product like this without accounting for that use case. It just shouldn't be the only way the product can ever be used.

Being usable without any controllers is also a significant way that it keeps you in touch with your surroundings. You're not giving up normal interactions in exchange for VR ones; you're gaining AR interactions in addition to all of your normal physical interactions. I think most people don't appreciate just what a radical difference this can be.

I think part of why the Vision Pro launched when it did with the strategy and philosophy it did is specifically so that the market doesn't default to how Meta does things. Even if the Vision Pro doesn't sweep the world like the iPhone did, the future will be better if we have examples of how to balance the potential of VR with the potential of AR.

Disclaimer: I don't actually have a Vision Pro and a lot would have to change before I'd consider buying it. I'm just explaining why I think they deserve credit for good intentions, and why, given the market is going to contain products like this now, I'm glad the Vision Pro is one of them.


I think the ad was uncharacteristically tasteless but I still don’t understand the leap from “well that isn’t convincing me to buy their product” to “I’m personally offended and my day is ruined. Someone must apologize to me.”

Is this just the “I need validation by other people liking what I like” thing we see a lot in gaming, sports, etc. fandoms?

What I’m also curious about, given how shocking it is for Apple to totally miss the mark, is whether this was calculated.


> the reason why people reacted so strongly to the ad is that it couldn’t have hit the mark more squarely.

I think this is right. The ad is about the flattening of the old analog world into the digital one and it evokes strong feelings because our nostalgia for that is not a little about a real and significant loss, which the Apple commercial grotesquely celebrates.


It's more than just nostalgia.

The ad visibly destroys items associated with creativity and culture, celebrates that destruction, and replaces it all with an iPad. It reminded me of that scene in Pinocchio where the boys trash the mansion and the piano in it; it reminded me of book burnings. It could easily be understood as "screw the old world, it's all in our product now."

And to a demographic of Apple buyers who actually tend to like culture, who are more likely than the average to attend actual concerts, visit a gallery or enjoy their vinyl collection, that was highly, highly offensive.

Contrast that with Steve Jobs's "It is a music player, it is a computer, it is a phone" riff from the iPhone introduction: back then, it was not about replacing music players; it was a music player. The product was a tool, not the end. With the ad, the product is the end, not the tool.


I didn't consider it offensive, but I did not like it at all. My initial reaction was that Apple came across as a bunch of ignorant wankers. That's just a visceral reaction, but the more I thought about it, the more I felt it explains their behavior. They have become so lost in their own world that, even though they are clearly intelligent, they completely missed that people would have this reaction. (Even if they meant it to be arch or ironic, it still shows they are out of touch. You know you are out of touch when you thought you were being clever and someone says: "I don't find that funny.")

As Alan Kay said "A change in perspective is worth 80 IQ points." Apple lost their perspective. The worst thing about it is that it plays into a very old caricature of tech geeks - giving everyone a bad name.


The issue is not so much destroying some hardware. It's more the lack of knowledge about the use of these tools.

When does Apple expect a Vivaldi concert to feature the pianist turning up with only an iPad instead of a grand? The same goes for all the tools on display in the ad.


I think the point is this: most people in this world cannot and will not ever be able to afford a grand piano; however, they can afford an iPhone or iPad, which lets them write, create, and produce music.


Perhaps marking a turn from Apple considering itself the center of an ecosystem to considering itself the center of the ecosystem, trapped within an echo chamber of its own hall of mirrors?


20 odd years ago I worked for a bank.

We were doing a focus group that went slightly off the rails and a room full of people all agreed that the bank must still have paper records.

The bank had not had paper records for decades.

I'm not sure that "nostalgia" is the right word...


> but I still don’t understand the leap [...] to “I’m personally offended and my day is ruined. Someone must apologize to me.”

This comes across as rather belittling to me, akin to calling someone a "snowflake" for not acquiescing to someone else's version of reality. It is possible to find the theme of the ad objectionable in a way that transcends the personal: for example, out of concern for the possible consequences of coarsening commercial speech, or of promoting the devaluation of human creativity. I really don't think many of the critics were demanding a personal apology.


> I still don’t understand the leap from “well that isn’t convincing me to buy their product” to “I’m personally offended and my day is ruined. Someone must apologize to me.”

I see a lot of comments expressing the former, and a lot of comments reacting against the latter. Are you sure that's an accurate representation of the reaction?

> Is this just the “I need validation by other people liking what I like” thing we see a lot in gaming, sports, etc. fandoms?

Internet forums? ;)


I suspect a lot of people are actually kind of angry at themselves, but want to blame the ad for making them angry at themselves.

How many people have a painting set, or a musical instrument, and dream of getting good at it but just can't be bothered because they'd rather watch ads online?


I found the ad to evoke feelings of crushing power, of being demolished by an actual Moloch.

I don’t care for the instruments, but the sheer physicality of it, the dominating, destructive awesome power does linger.

It captures Apple and all big tech beautifully: pure, raw, admittedly impressive power that will swallow everything in its path. I found it one of the more (unintentionally) honest ads in years.

Not sure if society needs such presences, but that’s another discussion.


A flattening I’m watching with interest is the gradually increasing digital integration of the physical world. I’m hopeful that it opens the door to at least tap the brakes on the enshittification of our physical artifacts.

For example, there are landfill dumps-ful of mass produced electrical devices like shavers, toothbrushes, AirPods and so on tossed out for want of just a replacement battery made too difficult to replace by those mass consumers. But I’m seeing green shoots of hope in people leveraging the flattening by sharing how-to hacks to repair these, and in rare but hopefully increasing number of cases, improve them for longer service lives.

With gains in pushing CNC mills, wire EDM and more fabrication to small scales, big chunks of which are enabled by this flattening, I’m cautiously optimistic we will be able to distribute persisted, good engineering.


It’s gross that people spend so much effort repairing things.

Manufacturers could simply publish this info.

Water got into a medical thermometer I had. Claimed some water resistance protections. Quickly drained the battery. Managed to open it up (with difficulty — pry tab located under label) but inside was corroded. Saw the battery but not easy to get out. Of course peeling the label makes it not stick back properly either.

What should be a cheap and easy battery change was not. And for what should be a cheap battery is over $5 at the local grocery store. Makes buying another $10 thermometer look a good deal.

It shouldn’t be this hard…


This article made me think of something. Does anyone else find themselves trying to revert back to doing certain activities the "pre-smartphone" way?

Examples:

- Radio stations and records instead of streaming

- Longform, newspaper-like text (this site) vs. all video all the time

- Finding ways of reducing phone usage, ideally to zero?


At this point I'm convinced Apple is commissioning these pieces to get people to watch their ad.


So.. what's the problem again?


It definitely feels that people just NEED to be angry about SOMETHING these days...


Marketing doesn't happen in a vacuum. The creative industries have been having it rough for the past year and a half, and there's practically an internet war going on between AI prompt using artists and real ones. Add to that the writer's strike, the job market, etc.

If they had done a proper control screening of the ad, the backlash could have been anticipated; simply reversing it would have produced a much different message/effect.


I think it may be the opposite.

People are mad at a culture and economy that is increasingly dehumanizing and exploitative, where they feel hopeless when contemplating basic things like ever having a permanent home for their family, and when every interaction with modern commerce seems to be designed to trick them or steal from them.

Since they feel (and to some extent, are) powerless to affect the forces acting on them directly, they lash out at anything that reminds them of this dynamic, in whatever forum will listen to them.


No, this isn't it. People aren't mad at the people dehumanising them. In fact people celebrate those people en masse.

Facebook, Google, etc. would all disappear if what you said were true. Politicians would be voted out, etc.

The truth is, people don't mind being dehumanised if it can make them feel more like their neighbours, or even better, above them. We're all too lazy to care about being dehumanised, like pigs to slaughter.

People lash out not because they feel helpless, but because they feel superior. They're not, they're just bored and life has been good so things they invent to get upset about look extremely bad to them.


Many people consider it perfectly acceptable to like something. So equally, it's completely valid to dislike something.


Nobody takes an L in the marketplace of ideas anymore. If people don't like something that you do, you can just say they have a type of derangement syndrome, or it's because they have a need to be angry about something, or because they're afraid of change, or so on. The key is to make sure that your accusation is broad and vague.


Yes, all of those creatives who have been losing their jobs over the past few years to this "flattening" process and are now seeing an existential threat from AI that could eliminate most of their careers entirely just "NEED to be angry about SOMETHING".

The lack of empathy is astounding...


It’s been a constant since 2016.

Tho the article has some interesting ideas and historical insights.


Life is so good that we get bored, so we get upset at things that are perfectly fine because we know we can control how we feel and stop at any time.

Hardship keeps people busy, boredom forces people into drama.


People have been "angry on the internet" for decades, but it's more recently that it's metastasized into the "real world" on an overwhelming level, and been monetized. This is grossly irresponsible and is going to result in violence and pushback, of which the US ""TikTok ban"" (not a ban) is a small foreshadowing.


Please, I beg you, pick up a history book.

Culture warriors inciting violence against popular culture https://pophistorydig.com/topics/burn-the-beatles-1966/

Or, you know, literally anything from the Children's Crusade to the War of Jenkins' Ear. Raising a rabble to violence has been going on forever.


Just because it's always happened doesn't mean that present day complicity in it is fine.


I'd forgotten about the Aggregation Theory piece.

"By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle."

Sadly, the reality of the virtuous circle though is: enshittification.

"...from music to video to books to art; the extent to which being “special” meant being scarce is the extent to which the existence of “special” meant a constriction of opportunity"

I don't get this. Rarity or inaccessibility has been used as marketing tool, sure. But great music and books were not scarce for a long time pre-internet. Feels like a "never mind the quality, feel the width" view of culture.

"LLMs are breaking down all written text ever into massive models that don’t even bother with pages: they simply give you the answer."

When will people stop saying that? They give an answer, yes, but is it the answer? Caveat emptor.


> But great music and books were not scarce for a long time pre-internet.

For some maybe. When I was first learning to program I would drive to the local bookstore and copy down code from books out of their very small tech section. I couldn't afford to buy any of the books at the time (out of the very small selection), and they were too new for the local library to carry. Now, I can learn about almost _anything_ for free within a few clicks.

Music was similarly gated, but more so by lack of money than overall access.


> Sadly, the reality of the virtuous circle though is: enshittification.

I had the same thought; where does this fit into Ben's idea that user experience trumps everything?


> the ad resonated so deeply is that it captured something deep in the gestalt that actually has very little to do with trumpets or guitars or bottles of paint [..] - everything is an app.

I feel like the author -almost- has it here. It's not that everything is an app, it's that all tasks and activities are performed and accessed via a glowing screen of varying sizes. It really does flatten the experience, literally and figuratively.

As an amateur "contractor", I've got a shed-full of tools; each of these tools serves a rather specific purpose. Most of them are (conceptually) old and complete, and their physical form closely reflects their domain. A brute sledgehammer next to a set of mini screwdrivers, a sharp Japanese chisel of folded steel, a relentless bulldog of a sawzall with a multi-purpose blade. You hold a tool and it resonates with the type of work you've done with it in the past.

Using these tools, a person feels extended in a physical sense. It's true, Jobs's little big epiphany that "humans are tool makers" - we're very little without the tools we've made. But the tools we made in turn shape us. The old adage applies: "when all you've got is a hammer, everything looks like a nail". When all you've got is a smooth screen, everything looks like rainbow puke.

The great flattening robs us of the universality of tool-making and tool-using, in a big part because we've created these universal tools that serve a myriad purposes in their one form. Sure, the markings on the screen are different every time. But look from away, and all you see is a person hunched over a glowing rectangle for endless hours; replace that object with anything else and the scene becomes grotesque and tragic.

At work the other day I joked that an LLM is a "data laundering service" -- and for the most part, that it primarily is. Washing away any licences and attributions, it mostly preserves the coherence of knowledge it's trained on. Ask it something and anything, and it will render it for you. Now we're in the realm of conceptual flattening, now the universality has begun to interface with our minds directly, it has begun to replace and therefore atrophy and rob us of some of the prime abilities that let us construct tools in the first place.

It is a terrifying thought that we are heading into a future where people outsource their reasoning and creativity to tools. We're giving up something inherently human, but we've also (and we have been for at least a century) climbed so high onto the shoulders of giants, we've long ago lost sight of the ground. A vision of a world looms, post-collapse, where worshippers line up to consult oracles; the last vestiges of before-the-fall tech, remaining repositories of knowledge we've given up, still functioning, but now pure magic to those who attend them.


Yes what a great observation.

The ad is so shocking because it's the truth.

We all know it.


Do you really think any of the professionals actually using the tools being destroyed in the ad are dropping their tools and using an iPad instead?

May I propose giving Damon Albarn's (of Gorillaz) record "The Fall" a listen? It was indeed produced entirely on an iPad, by a world-famous musician, so it's the perfect test case for this, and I quote you here: shocking truth!


Reject it. Go analog. Enter the world. You'll be happier and your life will have meaning.


Hard to take this advice seriously from someone on HN.

Vast majority of people here owe their wealth to “going digital” and then seem to love telling other people to do the opposite.

Computers are pretty great. The only people who really (think they) want to go back to 1980 are the people who were 16 in 1980.


I feel like I straddle the spectrum and would agree with op. I grew up in the 90s, worked in tech all my life and will tell you about the wonderful time and amazing experiences I have with people who quite literally live in VR

But I also recognise that the happiest most alive times in each year are when tech goes away and people are the focus. When power is out and we all gather to play board games, when we gather in the countryside and talk until the camp fire burns out.

Inside of tech and out I’d encourage people to focus on their connection with others if they feel something missing in their life


I'd take a compromise and go back to '99. Sure, we had punch-the-monkey ads but social networks had barely become a thing, it was all blogs and forums for the most part. Plenty of reasons to go outside without filming the endeavor, people were ok with not being able to reach you every minute.

But I was also 16 in the late 90s so you probably have something there.


> But I was also 16 in the late 90s so you probably have something there.

Whenever a man talks about the ideal time for __________, you can have a very high level of certainty that he was ~16-22 years old during that time period.

It's roughly 8-10% of his life, but it'll be correct about 90% of the time.


They said happiness, you said wealth. You can both be right.


"Like today, the industrial revolution included a period of time that saw many lose their jobs and a massive surge in inequality. It also lifted millions of others out of sustenance farming. Then again, it also propagated slavery, particularly in North America."

Slavery existed long before the industrial revolution and still exists to this day. Talk about being grossly ignorant of not just history but of today too.


I think that it might be accurate to say that it propagated slavery in a unique way even if it didn't start it. I.e. would slavery have been the same institution in the US without an international slave trade and industrialized cotton processing & manufacturing?


Look up the Triangle Trade. An international slave trade was the status quo for centuries before the Industrial Revolution.


That's what "propagate" means...



