
This seems to be a result of relying on the LLM to accurately extract information that needs to be exact.

I touch on this from my own experience: https://youtu.be/cs5cbxDClbM?si=IQIFAD38cVzLCs55&t=486

Basically, if you have the actual "factual" information, use it directly instead of hoping the LLM will accurately extract it and pass it as part of a function call. In this case they already know what the accurate URLs are, so just use those.
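
A minimal sketch of that pattern in Python: let the model choose *which* document it means by a short id, and resolve the exact URL from your own table instead of asking the model to reproduce it. The DOCS table and open_doc tool name here are hypothetical.

    DOCS = {
        # ids the model is allowed to pick from, mapped to exact URLs we control
        "pricing": "https://example.com/docs/pricing-2024-v3",
        "quickstart": "https://example.com/docs/quickstart",
    }

    def open_doc(doc_id: str) -> str:
        # Tool handler: the model supplies only doc_id; the exact URL comes
        # from our own table, so it can never be mistranscribed.
        url = DOCS.get(doc_id)
        if url is None:
            raise ValueError(f"unknown doc_id: {doc_id!r}")
        return url

The tool schema can then expose doc_id as an enum of known ids, so the model never has to emit a URL at all.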




The LLM _might_ do what you want.

Where I currently work, our function calls regularly fail, only to succeed flawlessly on a retry. (I believe we're on the order of tens of millions of OpenAI calls a day.)

These are non-deterministic systems. I wouldn't even trust them to accurately extract text unless you did a beam search or something similar to average out different LLM outputs.
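
A minimal sketch of the retry-until-valid approach described above, in Python; make_call and validate are hypothetical callables you supply:

    import json

    def call_with_retries(make_call, validate, max_attempts=3):
        # make_call issues one non-deterministic LLM request and returns the
        # raw JSON string of the tool-call arguments; validate raises
        # ValueError on anything malformed or out of range.
        last_err = None
        for _ in range(max_attempts):
            try:
                args = json.loads(make_call())
                validate(args)
                return args
            except (json.JSONDecodeError, ValueError) as err:
                last_err = err  # resample; the next attempt may come back clean
        raise RuntimeError(f"gave up after {max_attempts} attempts") from last_err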



