n_kr's comments | Hacker News

Hey, sorry for the late reply. Yes, I use PEP 723 already, but there is also a requirements.txt so that VS Code can be happy and do type checks etc.

I really wish there were a way to tell VS Code to understand inline metadata.
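
For anyone who hasn't seen it, PEP 723 metadata is just a comment block at the top of the script; the dependency below is only a placeholder for illustration:

    # /// script
    # requires-python = ">=3.11"
    # dependencies = [
    #     "requests",  # placeholder dependency, not from my actual script
    # ]
    # ///
    import requests

    # Tools like `uv run script.py` read the block above and install the deps,
    # but the editor's type checker still wants a requirements.txt / venv.
    print(requests.get("https://example.com").status_code)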


It may be the way I use it, but qwen3-coder (30B with Ollama) is actually helping me with real-world tasks. It's a bit worse than the big models for the way I use it, but absolutely useful. I do use AI tools with very specific instructions though: file paths, line numbers if I can, specific direction about what to do, my own tools, etc. So that may be why I don't see such a huge difference from the big models.
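
For example, something roughly like this (the file path, line numbers, and model tag are made up, and it assumes a local Ollama server on the default port):

    import requests

    # A narrow, fully specified instruction: exact file, exact lines, exact change.
    prompt = (
        "In src/parser.py, lines 40-55, extract the retry logic into a helper "
        "called retry_request and call it from fetch_page. Change nothing else."
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "qwen3-coder:30b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    print(resp.json()["response"])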

I should try Kimi K2 too.


It has everything to do with the way you use it. And the biggest difference is how fast the model/service can process context. Everything is context. It's the difference between iterating on an LLM-boosted goal for an hour vs. five minutes. If your workflow involves chatting with an LLM and manually passing chunks, and manually retrieving the response, and manually inserting it, and manually testing....

You get the picture. Sure, even last year's local LLM will do well in capable hands in that scenario.

Now try pushing over 100,000 tokens in a single call, every call, in an automated process. I'm talking about the type of workflow where you push over a million tokens in a few minutes, over several steps.

That's where the moat, no, the chasm, between local setups and a public API lies.
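
Back-of-the-envelope, with made-up prefill speeds (assumptions, not benchmarks), just to show the scale:

    tokens_per_call = 100_000
    calls = 10  # one automated multi-step workflow

    total_tokens = tokens_per_call * calls
    for name, prefill_tok_per_s in [
        ("local 30B-class model (assumed)", 500),
        ("hosted API (assumed)", 10_000),
    ]:
        minutes = total_tokens / prefill_tok_per_s / 60
        print(f"{name}: ~{minutes:.0f} min just to ingest {total_tokens:,} tokens")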

No one who does serious work "chats" with an LLM. They trigger workflows where "agents" chew on a complex problem for several minutes.

That's where local models fold.


You'll see good results; Kimi is basically a microdosing Sonnet lol. Very, very reliable tool calls, but because it's microdosing, you don't wanna use it for implementing OAuth, more for adding comments or following strict direction (i.e. a series of text mutations).


I have a 1yo too, and I could do it. I used the other tools to make one which I liked.


Yes, I do that too. The important bit is the model; the rest is almost trivial. I posted a Show HN here about the script I've been using, which is open source now ( https://github.com/n-k/tinycoder ) ( https://news.ycombinator.com/item?id=44674856 ).

With a bit of help from ChatGPT etc., it was trivial to make, and I use it every day now. I may add DDG and GitHub search to it soon too.
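
To give an idea of how little plumbing is involved, here is a rough sketch of that kind of loop. It is not the actual tinycoder code; the model tag and endpoint are assumptions:

    import subprocess
    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"  # assumes a local Ollama server
    MODEL = "qwen3-coder:30b"                       # assumed model tag

    def ask(messages):
        # Send the conversation to the local model and return its reply text.
        r = requests.post(OLLAMA_URL, json={"model": MODEL, "messages": messages, "stream": False})
        r.raise_for_status()
        return r.json()["message"]["content"]

    history = [
        {"role": "system", "content": "You are a coding assistant. Reply with a unified diff only."},
        {"role": "user", "content": "Add a --verbose flag to cli.py that prints each processed file."},
    ]
    diff = ask(history)
    # Check the diff against the working tree before actually applying it.
    subprocess.run(["git", "apply", "--check", "-"], input=diff, text=True)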


This is not a coder; it helps with typing instructions. Coding is different. For example: look at my repository and tell me how to refactor it, write a new function, etc. In my opinion, you should change the name.


Could you please link to your repo? I'd love to look at it.

> This is not a coder; it helps with typing instructions.

'Coding' technically is just that. If you mean engineering, yes sure, I agree. Nothing right now automates engineering. Maybe some day.


You made my day with this link


> Trees, for example, have separately evolved at least 100 times.

Can you explain more? Sounds interesting


Trees are barely a firm category of plant at all. They're basically just tall plants with woody stems. Plants can gain and lose woody stems without too much trouble (relatively speaking, over evolutionary time), so any time a plant species currently growing soft stems can benefit from being really tall, it has a good chance of evolving into a "tree".


I've seen rather large cactuses turn the bases of their stems woody and bark-clad.



Thank you for the link.

As an aside: the blog post briefly talks about birds. It turns out that membrane wings are much easier to evolve than feathered wings. There have been lots of membrane-winged creatures (including "birds" with membrane wings in the Jurassic) but not nearly as many appearances of feathered wings.

https://www.youtube.com/watch?v=HxA38gH8Gj4


One example is oak trees being more closely related to tulips than to pine trees.

(Tulips and oak trees are both angiosperms, flowering plants, and share a common angiosperm ancestor. Pine trees on the other hand are gymnosperms.)


Is there a model that can generate vocals for an existing song, given lyrics and some direction? I can't sing my way out of a paper bag, but I can make everything else for a song, so it would be a good way to try a bunch of ideas and then involve an actual singer for any promising ones.


Heh, this will be a fun series.

I noticed that you are not using the Letsencrypt operator and CRDs :-|


Not yet...


That would be a great thing to have, but I can't imagine how it could be maintained. Managing versions of gdal+gdal-sys+geo+ndarray+ndarray-linalg has been a giant PITA recently, so I for one would welcome this feature.


> No matter how deep your knowledge is, you're only scratching the surface.

I understand this is just emphasis, but no, it's not magic, it's not innate ability, it's just software, man! If you have dug deep enough and understood it, that's it. The key phrase is IMO 'understood', but that's universal.


I think the point is that it may be impossible for a single human to have "nearly complete understanding" of how the networking stack works. But maybe what was meant was nearly complete understanding of the fundamentals; that's certainly achievable. Networking in the kernel is a beast of a thing, with specialists in small parts of it, and I don't think there's a single human who knows nearly all that those specialists know combined.

