
My guess is that they did RLVR (reinforcement learning with verifiable rewards) post-training for SWE tasks, and a smaller model can undergo more RL steps for the same compute budget.
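The tradeoff in that guess can be sketched with some back-of-the-envelope arithmetic: if the cost of one RL step scales roughly linearly with parameter count, a fixed FLOP budget buys proportionally more steps for a smaller model. This is a hypothetical illustration, not the lab's actual training setup; the budget, model sizes, and the per-parameter cost constant below are all made-up assumptions.

```python
# Hypothetical sketch: under a fixed FLOP budget, per-step cost that scales
# linearly with model size means a smaller model fits in more RL steps.
# All numbers (budget, sizes, cost constant) are illustrative assumptions.

def rl_steps(budget_flops: float, params: float,
             flops_per_param_per_step: float = 6.0) -> int:
    """RL steps affordable under a fixed FLOP budget, assuming the cost
    of one step is proportional to parameter count."""
    return int(budget_flops // (params * flops_per_param_per_step))

budget = 1e21                       # arbitrary post-training budget in FLOPs
steps_large = rl_steps(budget, 70e9)  # hypothetical 70B-parameter model
steps_small = rl_steps(budget, 7e9)   # hypothetical 7B-parameter model
print(steps_large, steps_small)       # the 7B model gets ~10x the steps
```

Under this linear-cost assumption, shrinking the model 10x buys roughly 10x the RL steps, which is the core of the comment's argument for why a smaller model can be trained harder on SWE tasks.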


