Au contraire, I ask questions about recent things all the time, because the LLM will do a web search and read the web page - multiple pages - for me, and summarize it all.
4o will always do a web search for a pointedly current question, give references in the reply that can be checked, and if it didn't, you can tell it to search.
o3 meanwhile will do many searches and look at the thing from multiple angles.
It seems like it shifts it from "using an LLM instead of a search engine is cheaper" to "using an LLM to query the search engine represents only a marginal increase in cost", no?
But that's from the user's perspective. Check Google's or OpenAI's pricing if you want grounded results via their APIs: Google asks $45 per 1k grounded searches on top of token costs. If your business model is based on ads, you're unlikely to be earning a $45 CPM. Same if you want to offer a free tier of your product; it gets expensive fast.
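The back-of-envelope math behind that point can be sketched quickly. This is an illustrative calculation, not real pricing beyond the $45/1k grounding fee mentioned above; the token cost and ad CPM figures are assumed placeholders:

```python
# Rough unit economics: grounded-search cost vs. ad revenue, per 1k requests.
# Only GROUNDING_FEE_PER_1K comes from the discussion above; the rest are
# hypothetical numbers for illustration.
GROUNDING_FEE_PER_1K = 45.00   # Google's grounded-search surcharge per 1k queries
TOKEN_COST_PER_1K = 5.00       # assumed model token spend per 1k requests
AD_CPM = 10.00                 # assumed ad revenue per 1k impressions

cost_per_1k = GROUNDING_FEE_PER_1K + TOKEN_COST_PER_1K
margin_per_1k = AD_CPM - cost_per_1k

print(f"cost per 1k grounded requests: ${cost_per_1k:.2f}")
print(f"margin per 1k at a ${AD_CPM:.0f} CPM: ${margin_per_1k:.2f}")
```

Under these assumptions every thousand free, ad-supported grounded queries loses $40, which is the squeeze the comment is pointing at.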