
I would guess those are the big 4. I am pretty sure they scraped most of the internet.

"Sentiment Analysis" & "Emotion Lexicons" are part of the training process for all major LLMs.




> I would guess those are the big 4.

What are your thoughts on the whole "Google was asleep at the wheel, innovation-wise" narrative? They obviously had Twitter/Reddit/Wikipedia/StackOverflow indexed as well as anybody else on the planet (at least I think they did... maybe not individual tweets?)

If both OpenAI and Google "index" the same content, why is a random-word generator (an LLM) able to outperform Google, which can actually cite sources, etc.?





