I've found myself having brand loyalty to Claude. I don't really trust any of the other models with coding; the only one I even let close to my work is Claude. And this is after trying most of them. Looking forward to trying 4.
Much like others, this is my stack (or o1-pro instead of Gemini 2.5 Pro). This is a big reason why I use aider for large projects. It allows me to effortlessly combine architect models and code-writing models.
I know in Cursor and others I can just switch models between chats, but it doesn't feel intentional the way aider does. You chat in architect mode, then execute in code mode.
I also use Aider (lately, always with 3.7-sonnet) and really enjoy it, but over the past couple of weeks, the /architect feature has been pretty weird. It previously would give me numbered steps (e.g. 1. First do this, 2. Then this) and, well, an architecture. Now it seems to start spitting out code like crazy, and sometimes it even makes commits. Or it thinks it has made commits, but hasn't. Have you experienced anything like this? What am I doing wrong?
The idea is that some models are better at reasoning about code, but others are better at actually creating the code changes (without syntax errors, etc.). So Aider lets you pick two models - one does the architecting, and the other does the code changes.
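In aider this is just command-line flags. Something like the following should set it up (flag names are from memory of recent aider versions, so double-check against aider --help; the model names are placeholders):

    aider --architect --model <reasoning-model> --editor-model <editing-model>

The first model plans the change in architect mode, and the second turns that plan into the actual file edits.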
I have been very brand loyal to Claude also, but the new Gemini model is amazing and I have been using it exclusively for all of my coding for the last week.
I am excited to try out this new model. I actually want to stay brand loyal to Anthropic because I like the people and the values they express.
Yah Claude tends to output 1200+ line architectural specification documents while Gemini tends to output ~600 lines. (I just had to write 100+ architectural spec documents for 100+ different apps)
Not sure why Claude is more thorough and complete than the other models, but it's my go-to model for large projects.
The OpenAI model outputs are always the smallest - 500 lines or so. Not very good at larger projects, but perfectly fine for small fixes.
I'd be interested to hear more about your workflow. I use Gemini for discussing the codebase, making ADR entries based on discussion, ticket generation, documenting the code like module descriptions and use cases+examples, and coming up with detailed plans for implementation that Cursor with Sonnet can implement. Do you have any particular formats, guidelines or prompts? I don't love my workflow. I try to keep everything in Notion but it's becoming a pain. I'm pretty new to documentation and proper planning, but I feel like it's more important now to get the best use out of the LLMs. Any tips appreciated!
For a large project, the real human trick for you to do is to figure out how to partition it into separate apps, so that individual LLMs can work on them separately, as if they were employees in separate departments.
You then ask an LLM to first write a features document for each individual app (in Markdown), giving it some early competitive guidelines.
You then tell the LLM to read that features document and write an architectural specification document. Maybe tell it to add example data structures, algorithms, or user interface layouts. All in Markdown.
You then feed these two documents to individual LLMs to write the rest of the code, usually starting with the data models, then the algorithms, then the user interface.
Again, the trick is to partition your project into individual apps. Also, an "app" here isn't necessarily a full application. It might just be a data schema, a single GUI window, a marketing plan, etc.
The other hard part is to integrate the apps back together at the top level if they interact with each other...
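To make the partitioning concrete, here is a hypothetical layout (all the app and file names are made up, purely to illustrate the idea):

    project/
      INTEGRATION.md         (how the apps talk to each other)
      apps/
        billing/
          FEATURES.md        (step 1: feature list)
          ARCHITECTURE.md    (step 2: spec with example data structures, algorithms, UI layouts)
          src/               (step 3: code, written data models first, then algorithms, then UI)
        reporting/
          FEATURES.md
          ARCHITECTURE.md
          src/

Each app gets its own LLM session that only ever sees its own two documents, and the top-level integration notes live in one place.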
Awesome, thanks! It's interesting how the most effective LLM use for coding kind of enforces good design principles. It feels like good architects/designers are going to be more important than ever.
Edit- Except maybe TDD? Which kind of makes me wonder if TDD was a good paradigm to begin with. I'm not sure, but I'm picturing an LLM writing pretty shitty/hacky code if its goal is just passing tests. But I've never really tried TDD either before or after LLM so I should probably shut up.
Same. And I JUST tried their GitHub Action agentic thing yesterday (wrote about it here[0]), and it honestly didn't perform very well. I should try it again with Claude 4 and see if there are any differences. Should be an easy test
Gemini 2.5 Pro replaced Claude 3.7 for me after using nothing but Claude for a very long time. It's really fast, and really accurate. I can't wait to try Claude 4, it's always been the most "human" model in my opinion.
Something I’ve found true of Claude, but not other models, is that when the benchmarks are better, the real world performance is better. This makes me trust them a lot more and keeps me coming back.
It’s possible to get to know the quirks of these models and intuit what will and won’t work, and how to overcome those limitations. It’s also possible to just get to know, like, and trust their voice. I’m certain that brand awareness is also a factor for me in preferring Claude over ChatGPT etc
I think it really depends on how you use it. Are you using an agent with it, or the chat directly?
I've been pretty disappointed with Cursor and all the supported models. Sometimes it can be pretty good and convenient, because it's right there in the editor, but it can also get stuck on very dumb stuff, retrying the same strategies over and over again.
I've had really good experiences with o4-mini-high directly in the chat. It's annoying going back and forth copying/pasting code between the editor and the browser, but it also keeps me more in control of the actions and the context.
Would really like to know more about your experience
I also use DeepSeek R1 as a daily driver. Combined with Qwen3 when I need better tool usage.
Now that both Google's and Anthropic's new models are out, I expect to see DeepSeek R2 released very soon. It would be funny to watch an actual open-source model get close to the commercial competition.
R1 takes more time to answer, but in the cases where I actually compared the answers, I don't remember a single one where R1 was worse than plain V3.
And I don't even have to wait that long. If I watch the thinking, I can quickly spot when it has misunderstood me and rephrase the question without waiting for the full response.
A nice thing about DeepSeek is that it is so cheap to run. It's nice being able to explore conversational trees without getting a $12 bill at the end of it.