> People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here.
I sure hope so. But until the hallucination problem is solved, there's still going to be a lot of toxic waste generated. We have got to get AI systems which know when they don't know something and don't try to fake it.
The "hallucination problem" can't be solved, it's intrinsic to how stochastic text and image generators work. It's not a bug to be fixed, it's not some leak in the pipe somewhere, it is the whole pipe.
> there's still going to be a lot of toxic waste generated.
And how are LLMs going to get better as the quality of the training data nosedives because of this? Model collapse is a thing. It's easy to imagine a scenario in which they never get better than they are now.
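A toy sketch of that scenario (purely illustrative assumptions: each "generation" is a Gaussian fit to its training data, and the next generation trains only on samples from the previous fit, with no fresh human data):

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(loc=0.0, scale=1.0, size=50)  # small pool of "real" training data
for generation in range(201):
    mu, sigma = data.mean(), data.std()          # "train" by fitting a Gaussian
    if generation % 50 == 0:
        print(f"gen {generation:3d}: learned std = {sigma:.4f}")
    # Next generation sees only synthetic output from the previous model.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

The learned spread drifts toward zero: the tails of the original distribution get lost, which is the basic mechanism the model collapse results describe.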