It may be the way I use it, but qwen3-coder (30B with Ollama) is actually helping me with real-world tasks. It's a bit worse than the big models for the way I use it, but absolutely useful. I do use AI tools with very specific instructions, though: file paths, line numbers when I can, specific direction about what to do, my own tools, etc. That may be why I don't see such a huge difference from the big models.
It has everything to do with the way you use it. And the biggest difference is how fast the model/service can process context. Everything is context. It's the difference between iterating on an LLM-boosted goal for an hour vs. 5 minutes. If your workflow involves chatting with an LLM, manually passing chunks, manually retrieving the response, manually inserting it, manually testing...
You get the picture. Sure, even last year's local LLM will do well in capable hands in that scenario.
Now try pushing over 100,000 tokens in a single call, every call, in an automated process. I'm talking the type of workflows where you push over a million tokens in a few minutes, over several steps.
That's where the moat, no, the chasm, between local setups and a public API lies.
No one who does serious work "chats" with an LLM. They trigger workflows where "agents" chew on a complex problem for several minutes.
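To make the "workflow, not chat" point concrete, here is a rough sketch of what such an automated multi-step pipeline looks like. Everything here is illustrative: `complete()` is a stand-in for whatever LLM API you actually call, and the step structure is hypothetical.

```python
# Sketch of an automated multi-step LLM workflow: each step pushes a large
# context into a single call and feeds the result into the next step, with
# no human chat round-trips in between. `complete` is a placeholder for a
# real model endpoint.
def complete(prompt: str) -> str:
    # Placeholder: a real setup would call an LLM API here.
    return f"<model output for {len(prompt)} chars of context>"

def run_workflow(repo_files: dict[str, str], goal: str) -> str:
    # Step 1: push the whole corpus (easily 100k+ tokens) in one call.
    context = "\n\n".join(f"# {path}\n{src}" for path, src in repo_files.items())
    plan = complete(f"Goal: {goal}\n\nCodebase:\n{context}\n\nProduce a step-by-step plan.")

    # Steps 2..n: each automated iteration re-sends large context plus
    # earlier results, so token throughput dominates total wall-clock time.
    result = plan
    for step in range(3):
        result = complete(f"Plan:\n{plan}\n\nPrevious output:\n{result}\n\nRefine step {step + 1}.")
    return result

print(run_workflow({"main.py": "print('hi')"}, "add logging"))
```

The point of the sketch is that with a few steps like these, a single run can easily push a million tokens through the model, which is where context-processing speed separates a public API from a local setup.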
You'll see good results. Kimi is basically a microdosing Sonnet, lol: very, very reliable tool calls, but because it's microdosing, you don't want to use it for implementing OAuth. Maybe for adding comments or strictly directed work (i.e., a series of text mutations).
This is not a coding model.
It helps with typing out instructions. Coding is different: for example, "look at my repository and tell me how to refactor it," "write a new function," etc.
In my opinion, you should change the name.
Trees are barely a firm category of plant at all; "tree" basically just means a tall plant with a woody stem. Plants can gain and lose woody stems without too much trouble (relatively speaking, over evolutionary time). So any time a plant species currently growing soft stems can benefit from being really tall, it has a good chance of evolving into a "tree".
As an aside: the blog post briefly talks about birds. It turns out that membrane wings are much easier to evolve than feathered wings. There have been lots of membrane-winged creatures (including "birds" with membrane wings in the Jurassic) but far fewer independent appearances of feathered wings.
Is there a model which can generate vocals for an existing song given lyrics and some direction? I can't sing my way out of a paper bag, but I can make everything else for a song, so it would be a good way to try a bunch of ideas and then involve an actual singer for any promising ideas.
That would be a great thing to have, but I can't imagine how it could be maintained. Managing versions of gdal + gdal-sys + geo + ndarray + ndarray-linalg has been a giant PITA recently, so I for one would welcome this feature.
> No matter how deep your knowledge is, you're only scratching the surface.
I understand this is just emphasis, but no, it's not magic, it's not innate ability, it's just software, man! If you have dug deep enough and understood it, that's it. The key phrase, IMO, is "understood", but that's universal.
I think the point is that it may be impossible for a single human to have "nearly complete understanding" of how the networking stack works. But maybe what was meant was nearly complete understanding of the fundamentals; that's certainly achievable. Networking in the kernel, though, is a beast of a thing, with specialists in small parts of it, and I don't think there's a single human who knows nearly all of what those specialists know combined.
I really wish there were a way to tell vscode to understand inline metadata.