Nah. I don’t “feel the AGI”. I think the AGI is a silly quest, just like having a plane flap its wings. Feynman had it right in the 80s: https://www.youtube.com/watch?v=ipRvjS7q1DI
I think the future is lots of incremental improvements that get replicated everywhere, with humans outclassed in nearly every field, to the point where they stop relying on each other.
As for LLMs, yes, I think they are best placed to know whether some code or invention is novel, because of their vast training. They could be far better than a patent examiner if trained on prior art, for instance.
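To be concrete, here's a minimal sketch of that idea. I'm approximating "trained on prior art" with retrieval over an embedded prior-art corpus instead of actual fine-tuning; the model name, corpus, and scoring are all placeholder assumptions, not a real examiner pipeline.

```python
# Hypothetical novelty screen: compare a claim against an embedded
# prior-art corpus. Retrieval stands in for fine-tuning here.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def novelty_score(claim: str, prior_art: list[str]) -> float:
    # High similarity to any existing art => low novelty.
    claim_vec = embed([claim])[0]
    art_vecs = embed(prior_art)
    return 1.0 - max(cosine(claim_vec, v) for v in art_vecs)

prior_art = [
    "A method for compressing images using wavelet transforms.",
    "A system that caches database queries keyed on normalized SQL.",
]
print(novelty_score("Caching parameterized SQL queries by canonical form.", prior_art))
```

A real examiner would also need claim-by-claim comparison, not one similarity number, but even this crude filter covers more prior art than any human can hold in their head.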
What you’re not used to is an LLM being fed something that you would statistically / heuristically expect to be average but is in fact the polished result of years of work. The LLM freaks out, and you get surprised. You think it was the prompts. The prompts get changed, and the END result is the same (scroll to the bottom).
I want to see whether foundational LLMs can be used as a good first filter for dealflow and evaluating actual projects.
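Roughly what I'd try first, as a sketch rather than a working pipeline: an LLM-as-judge pass that scores each pitch against a fixed rubric before any human reads it. The model name, rubric wording, and cutoff below are my assumptions, nothing validated.

```python
# Hypothetical first-pass dealflow filter: score a pitch on a fixed
# rubric and return structured JSON; only high scorers reach a human.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Score the following pitch from 0-10 on each axis: "
    "novelty, technical_depth, market_plausibility. "
    'Respond as JSON: {"novelty": n, "technical_depth": n, '
    '"market_plausibility": n, "one_line_rationale": "..."}'
)

def screen_pitch(pitch: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": pitch},
        ],
    )
    return json.loads(resp.choices[0].message.content)

scores = screen_pitch("We cache parameterized SQL queries by canonical form...")
if min(scores["novelty"], scores["technical_depth"]) >= 6:  # arbitrary cutoff
    print("escalate to human review:", scores["one_line_rationale"])
```

The interesting experiment is whether the scores correlate with what a good investor would pick, or whether the model just rewards polish.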
The problem with using an LLM to validate reality is that you still need to prove your genius code works in the real world. ChatGPT won't hire you, and it even has your code already.