>We are not there yet, but if AI could replace a sizable number of workers, the economic system would be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega-companies are the only providers of intelligence: either AI will eventually become a commodity, or governments will do something about such an odd economic setup (one where a single industry completely dominates all the others).
I think the scenario where companies that own AI systems don't get benefits from employing people, so people are poor and can't afford anything, is paradoxical, and as such, it can't happen.
Let's assume the worst case: Some small percentage of people own AIs, and the others have no ownership at all of AI systems.
Now, given that human work has no value to those owning AIs, those humans not owning AIs won't have anything to trade in exchange for AI services.
Trade between these two groups would eventually stop.
You'll have some sort of two-tier economy where the people owning AIs will self-produce (or trade between them) goods and services.
However, nothing prevents the group of people without AIs from producing and trading goods and services between them without the use of AIs. The second group wouldn't be poorer than it is today; just the ones with AI systems will be much richer.
This worst-case scenario is also unlikely to happen or last long (the second group will eventually develop its own AIs or already have access to some AIs, like open models).
If models got exponentially better with time, then that could be a problem, because at some point, someone would control the smartest model (by a large factor) and could use it with malicious intent or maybe lose control of it.
But it seems to me that what I thought some time ago would happen has actually started happening. In the long term, models won't improve exponentially with time, but sublinearly (due to physical constraints). In that case, the relative difference between them would shrink over time.
Sorry, this doesn't make sense to me. Given that tier one is much richer and more powerful than tier two, any natural resources and land traded in tier two exist only at the mercy of tier one not interfering. As soon as tier one needs some land or natural resources from tier two, tier two's needs are automatically superseded. It's like an animal community living next to human civilization.
The marginal value of natural resources decreases with quantity, and natural resources would have only a much smaller value compared to the final products produced by the AI systems. At some point, there would be an equilibrium where tier 1 wouldn't want to increase its consumption of natural resources w.r.t. tier 2, or if they did, they'd have to trade with tier 2 at a price higher than they value the resources.
I have no idea what this equilibrium would look like, but natural resources are already of little value compared to consumer goods and services.
The US in 2023 consumed $761.4B of oil, but the GDP for the same year was $27.72T, so oil amounted to under 3% of output.
There would be another valid argument to be made about externalities. But it's not what my original argument was about.
I thought the assumption was that tier two has nothing to offer tier one and is technologically much inferior, since tier one is AI-driven. So if tier one needs something from tier two, I don't think they even need to ask. W.r.t. market equilibrium: indeed, I think it will settle at an equilibrium with increasing cost of extraction, so they will not spend arbitrary amounts to extract. But this also means there will probably be no way for tier two to extract any of the resources tier one needs at all, because the marginal cost is determined by tier one.
> So if tier one needs something from tier two I don't think they need to even ask
You mean stealing? I'm assuming no stealing.
> But this also means probably there will be no way way for tier two to extract any of the resources which tier one needs at all bc the marginal cost is determined by tier one
If someone from tier 2 owns an oil field, tier 1 has to pay them for it at a price higher than the value the tier 2 person places on it, so at the end of the transaction, both would come out with a positive return. The price is not determined by tier 1 alone.
If tier 1 decides instead to buy the oil, then again, they'd have to pay for it.
Of course, in both these scenarios, this might make the oil price increase. So other people from tier 2 would find it harder to buy oil, but the person in tier 2 owning the field would make a lot of money, so overall, tier 2 wouldn't be poorer.
If natural resources are concentrated in some small subset of people from tier 2, then yes, those few would become richer while the rest of tier 2 would have less purchasing power for oil.
However, as I mentioned in another comment, the value of natural resources is only a small fraction of that of goods and services.
And this is still the worst-case, unlikely scenario.
OK, let's assume no stealing (which is unlikely). I think the previous argument was a little flawed anyhow, so let me start again.
I mean, fundamentally, if tier 2 has something to offer tier 1, we're not yet at the equilibrium you describe (of separate economies). I think it's likely that tier 2 (before full separation) initially controls some resources. In exchange for those resources, tier 1 has a lot of AI-substitute labor it can offer tier 2. I think the equilibrium will be reached when tier 2 is offered some large sum of AI labor for those means of resource production. In the interim, this will make tier 2 richer. But in the long run, when the economies truly separate, tier 2 will have basically no natural resources.
This point about natural resources being a small fraction reflects the current-day breakdown. In a future where AI autonomously increases the efficiency of the loop that turns natural resources into more AI compute, I think their fraction will rise to much higher levels. Ultimately, I think such a separation as you describe will be stable only when all natural resources are controlled by tier 1 and tier 2 gets by with either gifts or stealing from tier 1.
> Ultimately, I think such a separation as you describe will be stable only when all natural resources are controlled by tier 1 and tier 2 gets by with either gifts or stealing from tier 1.
For that to happen, tier 1 would have to buy all of the resources from tier 2. Would you sell your house and be homeless so that you could have a highly efficient humanoid robot? I don't think so. And sooner or later, what tier 2 would want from tier 1 is what they need to build their own AIs, and then they'd be more similar to tier 1.
If tier 2 amounts to 95% of the population, then the amount of power currently held by tier 1 is meaningless. It is only power so long as the 95% remain cooperative.
In practice, tier 1 has the tech and know-how to convince tier 2 to remain cooperative against their own interests. See the contemporary US, where inequality is rather high, and yet the tier 2 population is impressively protective of the rights of tier 1. The theory that this will change once tier 2 has it much worse than today remains to be proven. Persecution of immigrants is also rather lightweight today, so there is definitely room to ramp it up to pacify tier 2.
This only works as long as people are happily glued to their TVs. Which means they have a non-leaking roof above their head and food in their belly. Just at a minimum. No amount of skillful media manipulation will make a starving, suffering 95% compliant.
I'm assuming no coercion. In my scenario, tier 1 doesn't need any of that except natural resources because they can self-produce everything they need from those in a cheaper way than humans can.
If someone in tier 1, for instance, wants land from someone in tier 2, they'd have to offer something that the tier 2 person values more than the land they own.
After the trade, the tier 2 person would still be richer than they were before the trade. So tier 2 would become richer in absolute terms by trading with tier 1 in this manner.
And it's very likely that what tier 2 wants from tier 1 is whatever they need to build their own AIs.
So my argument still stands. They wouldn't be poorer than they are now.
I think the bigger relief is that I know humans won't put up with a two-tiered system of haves and have-nots forever, and eventually we will get wealth redistribution. Government is the ultimate source of all wealth and organization; corporations are built on top of it and are thus subservient.
Having your life dependent on a government that controls all AIs would be much worse. The government could end up controlling something more intelligent than the entire rest of the population. I have no doubt it will use it in a bad way. I hope that AIs will end up distributed enough. Having a government controlling it is the opposite of that.
Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.
At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
Why can't AIs be controlled with democratic institutions? Why are democratic institutions worse? This doesn't seem to be the case to me.
Private institutions shouldn't be allowed to control such systems, they should be compelled to give them to the public.
>Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.
As long as Zuckerberg has no army forcing me, I'm fine with that. The issue would be whether he could breach contracts or get away with fraud. But if AI is sufficiently distributed, this is less likely to happen.
>At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
I don't think of democracy as a goal to be achieved. I'm OK with democracy in so far it leads to what I value.
The big problem with democracy is that most of the time it doesn't lead to rational choices, even when voters are rational. In markets, for instance, you have an incentive to be rational, and if you aren't, the market will tend to transfer resources from you to someone more rational.
No such mechanism exists in a democracy; I have no incentive to do research and think hard about my vote. It's going to be worth the same as the vote of someone who believes the Earth is flat anyway.
I also don't buy that groups don't make better decisions than individuals. We know that diversity of thought and opinion is one way groups make better decisions than individuals do; why should we believe that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I'm not buying the argument. Reading your comment it feels like there's an argument to be made that there aren't enough democratic systems for the people to engage with. That I definitely agree with.
> I also don't buy that groups don't make better decisions than individuals.
I didn't say that. My example of the market includes companies that are groups of people.
> We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would there be harm in believing that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I can see this in myself; I don't need hypotheticals. Some time ago, I voted in a referendum that made nuclear power impossible to build in my country. I voted just like the majority. Years later, I became passionate about economics, and only then did I realise my mistake.
It's not that I was stupid, and there were many, many debates, but I didn't put the effort into researching on my own.
The feedback in a democracy is very weak, especially because cause and effect are very hard to discern in a complex system.
Also, consensus is not enough. In various countries, there is often consensus about some Deity existing. Yet large groups of people worldwide believe in incompatible Deities. So there must be entire countries where the consensus about their Deity is wrong.
If the consensus is wrong, it's even harder to get to the reality of things if there is no incentive to do that.
I think, if people get this, democracy might still be good enough to self-limit itself.
This doesn't pass the sniff test, governments generate wealth all the time. Public education, public healthcare, public research, public housing. These are all programs that generate an enormous amount of wealth and allow citizens to flourish.
In economics, you aren't necessarily creating wealth just because your final output has value. The value of the final good or service has to be higher than the inputs for you to be creating wealth. I could take a functioning boat and scrap it, sell the scrap metal that has value. However, I destroyed wealth because the boat was worth more.
Even if you are creating wealth, but the inputs have better uses and can create more wealth for the same cost, you're still paying in opportunity cost. So things are more complicated than that.
Synthesizing your two positions, and extrapolating somewhat:
- human individuals create wealth
- groups of humans can create kinds of wealth that aren't possible for a single individual; these can be a wide variety of associations: companies, project teams, governments, etc.
- governments (formal or less formal) create the playing field for individuals and groups of individuals to create wealth
>governments generate wealth all the time. Public education, public healthcare, public research, public housing.
> These are all programs that generate an enormous amount of wealth and allow citizens to flourish.
I thought you meant that governments generate wealth because the things you listed have value. If so, that doesn't prove they generate wealth by my argument, unless you can prove those things are more valuable than alternative ways to use the resources the government used to produce them and that the government is more efficient in producing those.
You can argue that those are good because you think redistribution is good. But you can have redistribution without the government directly providing goods and services.
I think I'm more confused. I was trying to convey the idea that wealth doesn't have to be limited to the idea of money and value. Many intangible things can provide wealth too.
I should probably read more books before commenting on things I half understand, my bad.
Those programs consume a bunch of money and they don’t generate wealth directly. They are critical to let people flourish and go out to generate wealth.
A bunch of well educated citizens living on government housing who don’t go out and become productive members of society will quickly lead to collapse.
None of these are unique to the government and can also be created privately. The fact that government can create wealth =/= the government is the source of all wealth.
Developing a model of the real world, or even just learning only a self-consistent subset of the information, could be detrimental to the task of predicting the next token in the average text, given that much of the written information on many subjects is contradictory and wrong in various ways.
I don't know how they are doing RL on top of that, or how they are using and filtering synthetic data. But it's clear that even with GPT-5 they haven't solved the problem, as the presentation demonstrated with the very first prompt (I'm talking about the wrong explanation of the lift produced by a wing).
Very funny. The very first answer it gave to illustrate its "Expert knowledge" is quite common, and it's wrong.
What's even funnier is that you can find why on Wikipedia:
https://en.wikipedia.org/wiki/Lift_(force)#False_explanation...
What's terminally funny is that in the visualisation app, it used a symmetric wing, which of course wouldn't generate lift according to its own explanation (as the travelled distance, and hence the airflow speed, would be the same on both sides).
I work as a game physics programmer, so I noticed that immediately and almost laughed.
I watched only that part so far while I was still at the office, though.
A symmetric wing will not produce lift at zero angle of attack. But tilted up, it will. The distance over the top also increases, as measured from the point where the surface is perpendicular to the velocity vector.
That said, yeah the equal time thing never made any sense.
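For reference, thin airfoil theory makes the angle-of-attack point quantitative (a textbook result, not something from this thread):

$$C_L \approx 2\pi\alpha, \qquad L = \tfrac{1}{2}\rho v^2 S\, C_L,$$

so a symmetric airfoil at $\alpha = 0$ gives $C_L = 0$ and no lift, while a small tilt produces lift roughly linear in $\alpha$ (in radians). Equal transit time appears nowhere in the derivation.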
Of course. I'm just pointing out that the main explanation it gave was equal transit time, and it added that the angle of attack only "slightly increases lift", which quite clashes with the visualisation IMO.
I work in game physics in the AAA industry, and I have studied and experimented with ML on my own. I'm sceptical that that's going to happen.
Imagine you want to build a model that renders a scene with the same style and quality as rasterisation. The fastest way to project a point onto the screen is to apply a matrix multiplication. If the model needs to keep the same level of spatial consistency as the rasteriser, it has to reproject points in space somehow.
But a model is made of a huge number of matrix multiplications interspersed with non-linear activations. Because of these non-linearities, it can't map a single matrix multiplication onto its underlying multiplications; it has to recover the linearity by approximating the transformation with many more operations.
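To make the contrast concrete, here's a minimal sketch (hypothetical toy code, not from any engine) of the single linear operation a rasteriser needs per point, which a stack of non-linear layers can only approximate:

```cpp
#include <array>
#include <cstdio>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;

// Clip-space projection: one linear transform, 16 multiply-adds per point.
Vec4 project(const Mat4& m, const Vec4& p) {
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * p[c];
    return out;
}

int main() {
    // Toy perspective matrix (focal length 1, near/far planes omitted):
    // it copies z into w so the final divide scales by 1/depth.
    Mat4 proj = {{{1, 0, 0, 0},
                  {0, 1, 0, 0},
                  {0, 0, 1, 0},
                  {0, 0, 1, 0}}};
    Vec4 p = {2.0f, 1.0f, 4.0f, 1.0f};
    Vec4 q = project(proj, p);
    // Perspective divide gives the screen-space position: (0.5, 0.25).
    std::printf("screen: (%g, %g)\n", q[0] / q[3], q[1] / q[3]);
}
```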
Now, I know that transformers can exploit superposition when processing a lot of data. I also know neural networks could come up with all sorts of heuristics and approximations based on distance or other criteria.
However, I've read multiple papers showing that large models have a large number of useless parameters (the last one showed that their model could be reduced to just 4% of the original parameters, but the process they used requires re-training the model from scratch many times in a deterministic way, so it's not practical for large models).
This doesn't mean we might not end up using them anyway for real-time rendering. We could accept the trade-off and give up some coherence for more flexibility.
Or, given enough computational power, a larger model could be coherent enough for the human eye, while its much larger cost will be justified by its flexibility. In a way like analogous systems are much faster than digital ones, but we use digital ones anyway because they can be reprogrammed.
With frame prediction and upscaling, we have this trade-off already.
I once wrote a non-conservative generational GC for C++ just as an exercise, with the constraint that I could only use standard C++ (it was in C++17).
It worked based on a template type I called "gc_ptr<T>". So you could create one of these with the function template "make_gc(...)". This was the only way you could allocate a new garbage-collected object.
"make_gc" would check if a type descriptor for the type was already initialized and if not it would allocate a new one. It would then set a thread_local with the descriptor and the address of the current object being created.
Then it would call the object constructor.
If the type being instantiated had gc_ptr members, these would be initialized by their constructor. The gc_ptr constructor would, in turn, check the descriptor of the parent object and add a record to it representing the member. The record would store the member offset calculated from its address and the one of the parent object.
Garbage-collected objects would also use reference counting for gc_ptr(s) on the stack or inside objects that were not garbage-collected. It's easy to know if a gc_ptr is being constructed inside a garbage-collected object or not: If it is, then the code is also executing inside make_gc, so I just need a thread_local flag.
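A minimal sketch of how that registration trick can look in C++17 (everything beyond gc_ptr/make_gc is a hypothetical name; the real arena, reference counting, and collector hookup are omitted):

```cpp
#include <cstddef>
#include <new>
#include <utility>
#include <vector>

struct TypeDescriptor {
    std::vector<std::size_t> member_offsets; // offsets of gc_ptr members
    bool initialized = false;
};

// Set by make_gc for the duration of the object's constructor.
// (Nested make_gc calls would need a stack of these; elided here.)
thread_local TypeDescriptor* current_descriptor = nullptr;
thread_local void*           current_object     = nullptr;

template <typename T>
class gc_ptr {
public:
    gc_ptr() {
        // Constructed inside make_gc? Then we are a member of a GC'ed
        // object: record our offset in the enclosing type's descriptor.
        if (current_descriptor && !current_descriptor->initialized) {
            auto off = reinterpret_cast<char*>(this)
                     - static_cast<char*>(current_object);
            current_descriptor->member_offsets.push_back(
                static_cast<std::size_t>(off));
        }
        // Otherwise we live on the stack or in a non-GC'ed object and
        // would bump the target's reference count instead.
    }
private:
    T* ptr_ = nullptr;
};

template <typename T, typename... Args>
gc_ptr<T> make_gc(Args&&... args) {
    static TypeDescriptor descriptor;             // one per instantiated T
    void* storage = ::operator new(sizeof(T));    // stand-in for the GC arena
    current_descriptor = &descriptor;
    current_object     = storage;
    new (storage) T(std::forward<Args>(args)...); // members self-register
    descriptor.initialized = true;                // only the first T records
    current_descriptor = nullptr;
    current_object     = nullptr;
    // ... wrap storage in the returned gc_ptr and register it with the
    // collector; elided in this sketch.
    return gc_ptr<T>{};
}
```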
For the mark and sweep, I would use an atomic flag to mark the GC as running. If the flag was set, all gc_ptr(s) would stop their thread (by waiting on a mutex) as soon as some code tried to swap or reassign them.
This means that code that didn't touch them would be kept running during garbage collection.
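A sketch of that pointer-write check (again with hypothetical names): only threads that touch a gc_ptr during a collection block; everyone else keeps running.

```cpp
#include <atomic>
#include <mutex>

std::atomic<bool> gc_running{false};
std::mutex        gc_mutex; // held by the collector for the whole cycle

// Called by gc_ptr's assignment/swap before the pointer is modified.
inline void gc_write_barrier() {
    if (gc_running.load(std::memory_order_acquire)) {
        // A collection is in progress: park until the collector
        // releases the mutex, then let the write proceed.
        std::lock_guard<std::mutex> park(gc_mutex);
    }
}
```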
I would start marking objects from those allocated in the custom arena I used that had a reference count different from zero.
I wanted to add containers that could allocate contiguous GC'ed objects by wrapping the standard containers but never finished that.
GC'ed containers of gc_ptr(s) just worked fine.
I just re-ran the benchmarks I wrote to compare it against the Boehm-Demers-Weiser GC. It is quite a bit faster at the mark-and-sweep (up to an order of magnitude faster with thousands of objects and a bit less than 100 MB of GC'ed memory in total).
However, because of the atomic check, it was slower in swapping pointers by roughly an order of magnitude.
The game is in early access. Steam clearly explains that when you buy an early access game, you take the risk of it never being finished. I don't know much about the events or the promises behind this game, but I wouldn't say it's a scam just because it went bad. Social media are full of people calling it a scam. Am I missing something, or is it just the usual social media nonsense?
Original marketing and Steam tags indicated it was supposed to be an Open World MMO, but on release, neither of those things were true; they also took steps very recently to remove both of those tags from the Steam page.
It's not a total scam IMO but the game is really boring and nothing like what was promised in the trailers. An early access game should still be fun even if it's unfinished.
It was meant to be a full release for PC and consoles in Nov 2023 (after having been delayed twice already), at which point they announced it would be delayed a 3rd time, until Dec 8th, and would be released as PC-only Steam Early Access content instead of a full game like they had led gamers to expect. The 2 years leading up to that point were spent releasing promo videos and info which grossly misrepresented both the game's content and the progress of its development.
I don't know that this technically constitutes fraud, but it is completely disingenuous and abusive of the trust gamers had placed in them.
I've seen people complaining about early access games before just because they never got released (hence my question). From the comments I get this case is a bit different though.
I recently read "The Machinery of Freedom". It makes a strong case for an anarcho-capitalist society.
I'm not sure about either of these arguments. The most robust counterarguments, as far as I can tell, are those concerning common goods (where you can't exclude access to goods/services) and externalities. The book tries to answer most of these questions. While I'm not entirely sold (at least not yet), I recommend the book. It's not a rant about the Government; instead, it goes quite in depth, proposing what anarcho-capitalist institutions might look like and what kind of society those institutions would likely produce, by looking at historical cases (like medieval Iceland) and parts of the current system.
> you can't make these machine things without literally feeding this copyrighted information into them, therefore they do contain a copy.
They don't necessarily. Think about it: you can take some copyrighted material (for instance, a fictional book) and transform the information contained in it. You can then write a summary. The summary contains information that was present in the original, but it has been transformed, and hence it's not a copy. The ML model contains information that has been generalized to some degree. So it's a grey area IMO.
I'm not saying that "they do or don't objectively" because that doesn't matter as much as people think it does. I'm thinking of what a "jury" COULD decide. I think average joe on a jury is very likely to see that process as "feeding them in."
Moreover, you are clearly not in violation of copyright if you are talking about statistics of the material. In your example, printing out "there were 7,000 instances of the word 'the'" is certainly not a violation. An ML model is just a huge pile of these statistics.
However, saying "the first word of the book is 'The'" would not be a violation, while repeating that for every word in the book, as a whole, would be one.
I agree with you but I think it's important to have some nuance. Imagine I build a statistical model for 10-word sequences (10-grams) and then I trained it on a single book. I probably could pick some starting words and get most of the book back from the "statistics" I compiled. If I trained the same model on a giant dataset, the one book would just contribute to the stats.
All that to say, the models have the potential to memorize, but they mostly don't, and when they do, it's an undesirable failure mode, not deliberate copying.
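As a toy illustration of the single-book point above (hypothetical code, shortened to 4-word contexts so it fits here): trained on one text, greedy generation walks the source right back out, because almost every context has exactly one continuation.

```cpp
#include <iostream>
#include <iterator>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
    // Stand-in for a whole novel; with a single training text, nearly
    // every context has exactly one continuation.
    std::string book =
        "it was a bright cold day in april and the clocks were striking thirteen";
    std::istringstream in(book);
    std::vector<std::string> words{std::istream_iterator<std::string>(in),
                                   std::istream_iterator<std::string>()};

    const std::size_t n = 4; // context length (9 for a true 10-gram model)
    std::map<std::vector<std::string>, std::map<std::string, int>> model;
    for (std::size_t i = n; i < words.size(); ++i) {
        std::vector<std::string> ctx(words.begin() + i - n, words.begin() + i);
        ++model[ctx][words[i]];
    }

    // Greedy generation from the opening words regurgitates the source.
    std::vector<std::string> out(words.begin(), words.begin() + n);
    while (out.size() < words.size()) {
        std::vector<std::string> ctx(out.end() - n, out.end());
        auto it = model.find(ctx);
        if (it == model.end()) break;      // context never seen: stop
        const std::string* best = nullptr; // most frequent continuation
        int best_count = 0;
        for (const auto& [word, count] : it->second)
            if (count > best_count) { best = &word; best_count = count; }
        out.push_back(*best);
    }
    for (const auto& w : out) std::cout << w << ' ';
    std::cout << '\n';
}
```

Train the same structure on a giant corpus instead, and each individual book only nudges the counts.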
I like this argument a lot; but again -- how does this play out in the real world? It's pretty easy to refute what will happen in real life. Think, e.g Batman. I could write a very new and original "Batman" comic that doesn't strongly resemble anything -- movie, toy, comic, whatever -- that exists, but would be recognizable to fans.
Once it starts doing well, will DC come after me? You bet.
These models can definitely be used to intentionally store and recall content that is copyrighted in a way that's not subject to fair use. (eg: trivially, I can very easily train a large model that has a small subnetwork which encodes a compressed or even lossless copy of a picture, and if I were to intentionally train a model in that way, then this would be no less a copyright violation than distributing a JPEG of the same image embedded in some large binary).
But also, an unintentional copy of a copyrighted image is not a violation of copyright. (eg: an executable binary which happens to contain the bits corresponding to a picture of Batman -- but which are actually instruction sequences and were provably not intended to encode the picture -- clearly doesn't infringe.)
LLMs are somewhere in between those two cases, and the intent can enter both in the training and in the prompting.
Stack on top of this the fact that the models can also definitely generate content that counts as fair use, or which isn't copyrighted.
It's the multitude of possible outputs, across the copyright spectrum, combined with the function of intent in training and/or prompting, which make this such a thorny legal issue for which existing copyright statute and jurisprudence is ill-suited.
Taking your Batman example: DC would come after you for trademark as well as copyright, and the copyright claims would be very carefully evaluated with respect to your very specific work. But here we are talking about a large model that can generate tons of different work which isn't subject to copyright or which is possibly fair use.
I don't think that existing jurisprudence (or even statute?!) can handle this situation very well, at all, without tons of arbitrary interpretative work on the parts of juries/judges, because of the multitude and vague intent issues described above.
(...Also, presumably the merits of the DC case wouldn't matter, because your victory would be Pyrrhic unless you are a mega-corp. Which, from a legal-theory perspective, is neither here nor there, but from a legal-practicality perspective it may inform how companies go about enforcing copyright claims on model weights/outputs.)
Anyways. I think we have a right mess on our hands and the legislature needs to do their damn jobs. Welcome to America, I guess :)
Honestly, your second to last sentence is literally the kind of thing I hate hearing most from non-lawyers; the whole "if the legislature were just smarter" thing is just a weird pie-in-the-sky concept that is more-or-less like saying "the world would be better if CEOs were less greedy."
Like, yes, but it's not very likely to happen and it's not a particularly horrible thing if it doesn't; the law is slow and little-c conservative and you're just expecting it to be something it MOST often just ain't.
Let's say you take the Harry Potter books and create a spreadsheet with each word as a column and the number of times that word appears. Would that violate the copyright? I'd be interested in the rationale if someone thinks it would.
If your table instead held the number of times a word was followed by a chain of other words, that would be a closer comparison to AI weights. In that case, it would be possible to reconstruct passages from the Harry Potter books with reasonable accuracy (see GitHub Copilot).
The copyright aspect makes more sense when you start thinking of AI training models as lossy compression for the original works. Is a downsampled copy of the new Star Wars movie still protected under copyright?
Just tabulating the word counts would not violate copyright as it is considered facts and figures.
It resembles lossy compression in some ways, but in other important ways I think it doesn’t?
Like, if one has access to such a model, and doesn't count it towards the size of the compression/decompression program nor as part of the compressed size of the images, then that should allow for compressing images into substantially fewer bits than one could otherwise achieve (at least, assuming one doesn't care about the time spent compressing/decompressing; idk if this is actually practical).
But unlike say, a zip file, the model doesn’t give you a representation of like, a list of what images (or image/caption pairs) it was trained on.
Or like, in your analogy with the lower resolution of the movie, the lower resolution of it still tells you how long the movie is (though maybe not as precisely due to lower framerate, but that’s just going to be off by less than a second, unless you have an exceedingly low framerate, but that’s hardly a video at that point.)
There is a sense in which any model of some data yields a way to compress data-points from it, where better models generally give a smaller size. But, like, any (precisely stated) description counts as a model?
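To make that precise (a standard information-theory fact, not something from the thread): an entropy coder driven by a model $p$ spends about $-\log_2 p(x)$ bits on an item $x$, so the expected code length is

$$\mathbb{E}[\ell] \approx H(p_{\text{data}}) + D_{\mathrm{KL}}(p_{\text{data}} \parallel p),$$

and a better model (a smaller KL term) compresses better, whatever form the model takes.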
So, whether it is “like lossy compression” in a way that matters to copyright, I would think depends a lot on things like,
Well, for one thing, isn’t there some kind of “might someone consume the allegedly infringing work as a substitute for the original work, e.g. if cheaper?” test?
For a lower resolution version of Star Wars movie, people clearly would.
But if one wanted to view some particular artwork that is in the training set, I would think that one couldn’t really obtain such a direct substitute? (Well, without using the work as an input to the trained model, asking it to make a variation, but in that case one already has the work separate from the model, so that’s not really relevant.)
If I wanted to know what happened in minute 33 of the Star Wars movie, I could look at minute 33 of the compressed version.
What is a 'copy'? Byte-accurate, or 'something with a general resemblance'? Would a badly compressed "copy" of a copyrighted image still be 'a copy', or would it be some other thing? Would low-quality image compression be enough to skirt around copyright claims? Image formats and viewers just 'reproduce' an impression of the original data from derived compressed data. That is also just 'information that's been generalized by some degree', for space-saving purposes and so on. So what if image generators could be thought of as a 'very good multi-image compression algorithm' that can output multiple images, to a 'somewhat recognizable degree'?
Badly compressed still counts.
I think if the data allows you to reconstruct a recognizable recreation of the original work, you have a good chance of it being considered a derivative copy.
A mono audio version of Star Wars, compressed down to 320x240, filmed from the back of a theater on a VHS camera, converted to Video CD, would under any reasonable interpretation be just a copy of the original.
I assume it starts getting murky when there's some sort of transformation done to it. What if I run motion capture on it, and use that motion capture data to create a cartoon version of Star Paws (my puppies-in-space epic)?
What if I do a scene-for-scene recreation as an animated cartoon (removing any mentions of copyrighted names; Luke Skywalker is now Duke Dogwalker, for example)? In this case, there's been no actual data transfer: all the sprites are hand drawn, backgrounds, etc.
What would be an interesting exercise would be to try and create a series of artifacts that each on their own are considered non-derivatives, but can be used together to reconstitute the original. For example, create a compression method that relies heavily on transforms / macroblocks, but strip out any of the actual pixel data from the film. That info might be supplied as palette files which are themselves not really copyrighted data, but together with the compressed transform stream can be used to recreate the original video.
This is a great example. Summarizing or paraphrasing copyrighted content, or simply using it as a seed to generate input-output pairs - this kind of data transformation prior to training could solve the issues with copyright. It cleanly separates form from content.
It was around 3.4, as they were preparing the early 4.0 releases, that I was experimenting with Godot.
I should add that I was specifically trying to use the C# ecosystem for Godot at the time, and their plugins for testing. Since I ended up with Unreal, I'm refreshed enough in C++ that I might check Godot 4.0 out again this weekend.