The Google CoT is so incredibly dumb. I thought my models had been lobotomized until I realized they must be doing some sort of processing on the thing.
You are referring to the new (few-days-old-ish) CoT, right? It's bizarre why Google did it; it was very helpful to see where the model was making assumptions or doing something wrong. Now, half the time, it feels better to just use Flash with thinking mode off and ask it to manually "think".
I had assumed it was a way to reduce "hallucinations". Instead of me having to double-check every response and prompt it again to clear up the obvious mistakes, it just does that in the background with itself for a bit.
Obviously the user still has to double-check the response, but less often.