> It's early days and nobody knows how things will go, but to me it looks that in the next century or so
How is it early days? AI has been talked about since at least the 50s, neural networks have been a thing since the 80s.
If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?
Chances are, if the current wave does not achieve strong AI then there will be another AI winter, and what people will research in 30 or 40 or 100 years is not something our current choices can affect.
Therefore the interesting question is what happens short-term, not what happens long-term.
I said that one hundred years from now humans would likely have gone the way of the horse. It will be finished business, not something just beginning. We may take that with some chill, depending on how we value our species and our descendants and the long human history and our legacy. It's a very individual thing. I'm not chill.
There's no comparing the AI we have today with what we had 5 years ago. There's a huge qualitative difference: the AI we had five years ago was reliable but uncreative; the one we have now is quite unreliable but creative at a level comparable to a person. To me, it's just a matter of time before we finish putting the two together, and we have already started. Another AI winter of the sort we had before seems highly unlikely to me.
I think you severely underestimate what the 8 billion human beings on this planet can and will do. They are not like horses at all. They will not allow themselves to be ruled by, like, 10 billionaires operating an AI, and furthermore, if all work vanishes, we will find other things to do. Just ask a beach bum or a monk or children in school or athletes or students or an artist or the filthy rich. There _are_ ways to spend your time.
You can't just judge humans in terms of economic value, given that the economy is something those humans made for themselves. It's not like there can be an "economy" without humankind.
The only problem is the current transition, where perhaps _some_ work disappears, creating serious problems for those holding those jobs.
As for being creative: we had GPT-2 more than 5 years ago, and it did produce stories.
And the current AI is nothing like a human being in terms of the quality of its output. Not even close. It's laughable, and to me it seems like ChatGPT specifically is getting worse and worse while they put more and more lipstick on the pig by making it appear more submissive and produce more emojis.
When you have exponential growth, it's always early days.
Other than that I'm not clear on what you're saying. What is in your mind the difference between how we should plan for the societal impact of AI in the short vs the long term?
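To make that concrete, here's a toy sketch (the 2-year doubling time is a made-up number, purely for illustration): under exponential growth, the next doubling period adds as much as everything that came before it, so every point on the curve looks like early days in hindsight.

```python
DOUBLING_TIME = 2  # years, a made-up figure purely for illustration

def capability(year: float) -> float:
    """Capability level after `year` years of exponential growth."""
    return 2 ** (year / DOUBLING_TIME)

for year in (10, 20, 30):
    now = capability(year)
    # The gain over the next doubling period equals everything
    # accumulated so far, at every point on the curve.
    gain = capability(year + DOUBLING_TIME) - now
    print(f"year {year}: level so far {now:,.0f}, "
          f"gain over the next {DOUBLING_TIME} years {gain:,.0f}")
```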
Is it early days of exponential growth? The growth of AI to beat humans at chess and then Go took a long time; it looks like step-function growth. LLMs have limitations and can't keep doubling for much longer. I'd argue they never did double; it's been a step function with slow linear growth since.
The crowd claiming exponential growth has been at it for not quite a decade now. I have trouble separating fact from CEOs of AI companies shilling to attract that VC money. VCs desperately want to solve the expensive-software-engineer problem, and you don't get that cash by claiming AI will be 3% better YoY.
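For what it's worth, here's a toy sketch of the distinction I'm drawing (all numbers invented; there's no agreed metric for "AI capability"): a one-off step plus slow linear growth can look exponential over a short window, and the way to tell the curves apart is to watch whether they diverge.

```python
# Toy illustration, not a measurement of anything real: a one-off jump
# followed by slow linear gains can be mistaken for exponential growth
# if you only sample a short window around the jump.

def step_plus_linear(t: float) -> float:
    jump = 100.0 if t >= 5 else 0.0              # the one-off "LLM moment"
    return 10.0 + jump + 2.0 * max(0.0, t - 5)   # slow linear drift after

def exponential(t: float) -> float:
    return 10.0 * 1.6 ** t                       # genuinely compounding curve

for t in range(0, 11, 2):
    print(f"t={t:2d}  step+linear={step_plus_linear(t):7.1f}  "
          f"exponential={exponential(t):8.1f}")
# Near the jump the two curves cross and look similar; a few periods
# later they have clearly diverged, which is the test proposed above.
```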
> When you have exponential growth, it's always early days.
Let's take the development of CPUs, where for 30-40 years observable performance actually did grow exponentially (unlike the current AI boom, where it does not).
Was it always early days? Was it early days for computers in 2005?
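For reference, the rough arithmetic behind that CPU claim (the 2-year doubling period is the commonly cited Moore's-law figure, used here as an assumption rather than a precise measurement):

```python
# Back-of-the-envelope Moore's-law arithmetic: doubling roughly every
# 2 years compounds to about a millionfold improvement over 40 years.
doubling_period = 2      # years per doubling, the oft-quoted cadence
span = 40                # years of sustained exponential growth
doublings = span // doubling_period
print(f"{doublings} doublings -> {2 ** doublings:,}x improvement")
# 20 doublings -> 1,048,576x improvement
```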
In a way, yes. I think the rise of smartphones and the resulting proliferation of cheap mobile processors was earth-shattering in terms of what we consider to be a computer. If I'm reading the numbers correctly, there are about 2B people with their own PC/laptop, while about 5B people have their own smartphone. Assuming most PC owners also own a smartphone, roughly 3B of those 5B, or 60% of people using computing, do so only on a type of device that didn't exist in 2005.
And this is before we talk about new modalities that might take over in the future, like VR/AR, wearable AI "pins" (which is what I assume Jony Ive is working on), neural interfaces, and maybe even quantum computing. So I personally don't see any issue with thinking of what happened 20 years ago, and what's happening at the moment, as "early days for computers".
This is of course a semantics argument, but in my mind "early days" is a term for the time before things stabilize, and I'm very unclear on what that means for a field that's continuously undergoing rapid growth. As another extreme case: what would you say were the "early days" for life on planet Earth? I assume that most people wouldn't be thinking of just the first 50 years after the first RNA molecules got enclosed in a membrane.