Hacker News

I have a question. The video demos for this all mention that the o1 model is taking its time to think through the problem before answering. How does this functionally differ from, say, GPT-4 running its algorithm, waiting five seconds, and then revealing the output? That part is not clear to me.


It is recursively "talking" to itself: generating intermediate reasoning steps to plan, critique, and refine the answer before showing it. The extra time is spent on that additional computation, not on an artificial delay.
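A minimal sketch of what that loop could look like. This is an illustration only, not OpenAI's actual method; `call_model` is a hypothetical stand-in for a real LLM API call, stubbed out here so the example runs on its own:

```python
def call_model(prompt: str) -> str:
    # Toy stand-in for an LLM call; a real system would hit a model API here.
    if prompt.startswith("PLAN:"):
        return "1. restate the question 2. outline the answer"
    return "Refined answer based on: " + prompt[:40]

def answer_with_reasoning(question: str, rounds: int = 2) -> str:
    # First produce hidden "scratch" reasoning (the model talking to itself)...
    scratch = call_model("PLAN: " + question)
    # ...then draft an answer and repeatedly critique/refine it.
    draft = call_model(f"Answer '{question}' using plan: {scratch}")
    for _ in range(rounds - 1):
        draft = call_model(f"Critique and improve: {draft}")
    return draft

print(answer_with_reasoning("Why is the sky blue?"))
```

The key difference from a fixed delay is that each iteration does real work: the intermediate text feeds back into the next call, so more "thinking" time means more refinement, not just waiting.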



