IMVHO: those who cry "AI will destroy everything" AND those who equally cry "AI will make everything better" are both largely wrong for the present, and still wrong for the medium and long run.
Today "AI" systems are nice automatic "summarize tools", with big limits and issues. They might be useful in limited scenarios like to design fictional stuff, to be "search engine on steroid" (as long as their answers are true) and so on. Essentially they might help automation a bit.
The BIG, BIG, BIG issue is who trains them AND how we can verify them. If people get into the habit of taking any "answer" for truth, their grasp on reality will be even weaker than with today's "online misinformation", and that can go far beyond news (try to imagine the consequences of false medical imaging analysis). How we can verify is even more complex. Not only can't we train at home with interesting results, we also can't verify the truth of the mass of training material. Think of the classic Edward Bernays "dummy" scientific journal publishing some true papers alongside false ones stating that smoking is good for your health... https://www.apa.org/monitor/2009/12/consumer Now imagine the effect of carefully slipped false material in the big "ocean" of data...
They are trained using input from an army of underpaid "ghost workers", i.e. people with few rights or little economic freedom, and no consideration for their well-being.
Slave labor produces poor quality results, of course, but per se it has no specific bias; most western prosperity was built on exploiting peoples for centuries, so well... Any morality aside, I'm much more concerned about those who decide how to train and on what data than about the ghost slave labor they exploit...
I'm concerned about which data these real human beings were forced to train it on. The images they had to review were highly disturbing and illegal in the US.
Today "AI" systems are nice automatic "summarize tools", with big limits and issues. They might be useful in limited scenarios like to design fictional stuff, to be "search engine on steroid" (as long as their answers are true) and so on. Essentially they might help automation a bit.
The BIG, BIG, BIG issue is who train them AND how can we verify. If people start to get the habit of taking for truth any "answer" their grasp on the reality would be even lower that today "online misinformation", and that can go far beyond news (try to imaging false medical imaging analysis consequences). How can we verify is even more complex. Not only we can't train at home with interesting results but also we can't verify for truth the mass of training materials. Try to imaging the classic Eduard Bernays "dummy" sci journal publishing some true papers and some false one stating smoking is good for health... https://www.apa.org/monitor/2009/12/consumer now imaging the effect of carefully slipped false material in the big "ocean" of data...