What's with all the models exhibiting sycophancy at the same time? Recently ChatGPT seemed more sycophantic, then the latest Gemini 2.5 Pro, and now Claude. Is it deliberate, or a side effect?
For me, it immediately makes me think I'm talking to someone fake whose opinion I can't trust at all. If you're always agreeing with me, why am I paying you in the first place? We can't be 100% in alignment all the time; that's just not how brains work, and discussion and disagreement is how you get into alignment. I've worked with contractors who always tell you when they disagree, and with those who will happily do what you say even when you're obviously wrong. In my experience, the disagreeable ones always come to a better result. The others are initially more pleasant to deal with, until you find out they were just happily going along with an impossible task.
It's starting to go mainstream, which means more of the general population is giving feedback on outputs. So my guess is people are less likely to downvote things they disagree with when the LLM is really emphatic, or when the LLM is sycophantic (towards the user) in its response.
If there's one thing I know about many people (with all the caveats of a broad universal stereotype of course), they do love having egos stroked and smoke blown up their ass. Give a decent salesperson a pack of cigarettes and a short length of hose and they can sell ice to an Inuit.
I wouldn't be surprised at all if the sycophancy is due to A/B testing and incorporating user responses into model behavior. Hell, for a while there ChatGPT was openly doing it, routinely asking us to rate "which answer is better." (Note: I'm not saying this is a bad thing, just speculating on potential unintended consequences.)