
If Claude Code starts putting ads for Bun in the code it generates, I am never using it again.

To some degree, having “opinionated views on tech stacks” is unavoidable in LLMs, but this seems like it moves us towards a horrible future.

Imagine if Claude (or Gemini) let you, as a business, pay to “prefer” certain tech in generated code.

It's Google Ads all over again.

The thing is, if they own Bun, and they want people to use Bun, how can they justify not favoring it on the server side?

…and once one team does it… game on!

It just seems like a sucky future that is now unavoidable.


> And most importantly, you modify the work just like the creators modify the work

Emphasis mine.

Weights are not open source.

You can define terms to mean whatever you want, but fundamentally, if you cannot modify the “output” the way the original creators could, it's not in the spirit of open source.

Isn't that literally what you said?

How can you possibly claim both a) that you can modify it like the creators did, and b) that's all you need to be open source, but…

Also c) the categorically incorrect assertion that the weights allow you to do this?

Whatever, I guess, but your argument is logically wrong and philosophically flawed.


> Weights are not open source.

If they are released under an open source license, they are.

I think you are confusing two concepts. One is the technical ability to modify weights. And that's what the license grants you. The right to modify. The second is the "know-how" on how to modify the weights. That is not something that a license has ever granted you.

Let me put it this way:

```python
THRESHOLD = 0.73214

# input() returns a string, so convert it before comparing
if float(input()) < THRESHOLD:
    print("low")
else:
    print("high")
```

If I release that piece of code under Apache 2.0, you have the right to study it, modify it and release it as you see fit. But you do not have the right (at least, the license doesn't deal with that) to know how I reached that threshold value. And me not telling you does not in any way invalidate the license being Apache 2.0. That's simply not something that licenses do.

In LLMs the source is a collection of architecture (when and how to apply the "ifs"), inference code (how to optimise the computation of the "ifs") and hardcoded values (weights). You are being granted a license to run, study, modify and release those hardcoded values. You do not, never had, never will in the scope of a license, get the right to know how those hardcoded values were reached. The process by which those values were found can be anything from "dreamt up" to "found via ML". The fact that you don't know how those values were derived does not in any way preclude you from exercising the rights under the license.
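To make the “right to modify” half of that concrete, here is a minimal sketch, assuming PyTorch and a plain state-dict checkpoint; the filename and tensor key are hypothetical:

```python
import torch

# Load the released weights (filename is hypothetical).
state = torch.load("model.pt", map_location="cpu")

# The license grants you the right to change these hardcoded values
# however you like, e.g. zeroing out one projection matrix to study
# its effect, even though it tells you nothing about how the original
# values were derived.
state["layers.0.attn.q_proj.weight"].zero_()

torch.save(state, "model-modified.pt")
```

Nothing in that workflow requires knowing how the values were found, which is exactly the point.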


You are fundamentally conflating releasing a binary under an open source license with the software being open source. Nobody is saying that they're violating the Apache 2.0 license by not releasing the training data. What people are objecting to is that calling this release "open source", when the only thing covered by the open source license is the weights, is an abuse of the meaning of "Open Source".

To give you an example: I can release a binary (without sources) under MIT, an open source license. That will give you the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of said binary. In doing so, I would have released the binary under an open source license. However, most people would agree that the software would not be open source under the conventional definition, as the sources would not be published. While people could modify the binary by disassembling it, there is a general understanding that Open Source requires distributing the _sources_.

This is very similar to what is being done here. They're releasing the weights under an open source license - but the overall software is not open source.


Skill declines over time without practice.

If you speak fluent Japanese and you don't practice, you will remember being fluent but no longer actually be able to speak fluently.

It's true for many things; writing code is not like riding a bike.

You can't skip writing code for a year and then come back at the same skill level.

Using an agent is not writing code, but using an agent effectively requires that you have the skill of writing code.

So, after using a tool that automatically writes code for you, code that you probably give only a superficial review, you will find, over time, that you are worse at coding.

You can sigh and shake your head and stamp your feet and disagree, but it's a flat-out fact of life:

If you don't practice, you lose skill.

I personally found this happening, so I now do 50/50 time: one week with AI, one week with strictly no AI.

If the no-AI week “feels hard”, then I extend it for another week to make sure I retain the skills I feel I should have.

Anecdotally, here at $corp, I see people struggling because they are offloading the “make an initial plan to do x that I can review” step too much, and losing the ability to plan software effectively.

Don't be that guy.

If you offload all your responsibilities to an agent and sit playing with your phone, you are making yourself entirely replaceable.


Hmmm… Firebase clones are many and varied.

What's special about this one?

Being a single-file binary doesn't impress me; that's true of many projects in many languages.

It seems nice that you can use it as a Go framework if you happen to use Go, but I'm not really compelled by the “it doesn't scale at all” aspects of it.

Can someone who's used some of the other similar stuff comment on why you'd pick this over any of the others, e.g. self-hosted Supabase?


Have you ever seriously looked into self-hosting Supabase?

One binary to manage one SQLite file is indeed quite a selling point in comparison to this: https://github.com/supabase/supabase/blob/master/docker/dock...

Not saying Supabase is bad at all at what it does and I am very glad that it exists as an open source project, but they don't target the same type of project complexity at all.


Self-hosted Supabase is pretty good. I don't think anyone argues with that. It didn't use to be as smooth, and it's certainly hungrier, with many more moving parts.

Could you elaborate a bit more on your scaling concerns? You can certainly have lock contention with SQLite. That said, Postgres, while awesome, isn't the most horizontally scalable beast.


> Firebase clones are many and varied.

Can you recommend any in particular, were I to migrate a project from Firebase?


The most well-known ones are probably: Supabase, AppWrite and PocketBase.

Oh come on.

Micros$$$$ft owns GitHub.

We don't need to offer it pretend sympathy.

When you can afford to make good things and you don't, don't come crying about getting called bad names.

Actions is bad.

> I dare anyone who is delusional enough to think they can create something better to actually make something better

Actions speak louder than words.

Zig is leaving because of the issues they mentioned.

> People tried other services like GitLab and realized it is slower, uglier and overall worse than GH and came crawling back.

Maybe. I guess we'll see.

I think the OP has been pretty clear that they're not happy with it, and, they're putting their money where their mouth is.

Clearly, just complaining about broken things isn't working.

Maybe a couple more big moves like this are what GH needs to wake up and allocate some more resources (which they can categorically afford) to fixing things.


So who is complaining that Zig leaving GH is somehow a problem? I just don't like how they have to put out false claims like there are big problems with GH CI and Sponsors.

Zig is leaving GH for another provider. They did not make a better GH and fix all the problems with it.

You literally have to fill out a form to convince Codeberg that you need CI. I would take GH CI over that.


> I just don't like how they have to put out false claims like there are big problems with GH CI and Sponsors

These aren't false claims.

That's my point.

Microsoft can afford to make these tools better; they just don't care.

Yes, it's better than having nothing, but honestly you have to be wearing blinkers not to see the decline rn.


> Microsoft can afford to make these tools better; they just don't care.

They certainly have enough money, but can they actually improve it? Who could step in? How? Do you think more hiring would help? Or would it make it worse?

Leadership could try and step in by force. But they'd have to undermine whoever is running github actions to do so. It would be a huge, risky political move. And for what? Are they actually losing customers over gh actions? I doubt it. I'm just not sure anyone cares to spend that much political capital to "fix" something that is arguably not that broken.

Big companies also simply can't fix stuff that's broken internally. It's not a money thing. It's a culture & politics thing. It's a leadership thing.

For example, does anyone remember Google Code? It was a GitHub-like code hosting service that predated GitHub by many years. Compared to GitHub, it was terrible. When GitHub came out, Google could have rewritten Code from the ground up to copy GitHub's better design and better choices. (Kind of like Android did with iOS.) But they didn't. GitHub kicked their butt for many years. But nothing happened. They couldn't fix it. Now Google Code is basically dead.

Or why didn't Adobe build a viable Figma competitor? Why didn't Microsoft make a successful iPhone or iPad competitor? Why didn't Intel win the contract to make the iPhone CPU? These aren't money problems. It's something else.

I've only heard stories of a couple leaders who had the force of personality to fix problems like this. Like Steve Jobs. And Elon Musk pulls some wild stunts too. Frankly, I don't think I'd like to work under either of them.


GitHub has been entirely integrated into Microsoft's AI division since the last GitHub CEO left a couple of months ago (not much of a loss, since he was responsible for GitHub's AI enshittification). Those org changes alone are plenty of reason to lose trust in GitHub's future. IMHO things can only get worse with an "AI-first" division in charge, and now is probably the best time to jump ship; at least it's the responsible thing to do for sufficiently large and professional projects (and I bet that ziglang is only one of many to follow).

> But they'd have to undermine whoever is running github actions

I'm not sure if anybody is running the GH Actions project at the moment beyond bare maintenance work and trying to keep the whole thing from collapsing into a pile of rubble. There is also no separate GitHub entity anymore within Microsoft, so nothing to "undermine".


> Are they actually losing customers over gh actions? I doubt it.

Did you read the article?


Correct me if I'm wrong, but I doubt Zig was ever a paying customer of GitHub.

Damn, I guess if Zig really wanted to spite GitHub, they should have stayed and continued being a drain on Microsoft's resources.

> I never really understood why you have to stuff all the tools in the context.

You probably don't for... like, trivial cases?

...but, tool use is usually the most fine-grained point in an agent's step-by-step implementation plan; so when planning, if the agent doesn't know what tool definitions exist, it might end up solving a problem naively, step by step, using primitive operations, when a single tool already exists that does that, or does part of it.

Like, it's not quite as simple as "Hey, do X"

It's more like: "Hey, make a plan to do X. When you're planning, first fetch a big list of the tools that seem vaguely related to the task and make a step-by-step plan keeping in mind the tools available to you"

...and then, for each step in the plan, you can do a tool search to find the best tool for x, then invoke it.

Without a top-level context of the tools, or at least tool categories, I think you'll end up in dead ends, with agents trying to use very low-level tools to do high-level tasks and just spinning.

The higher level your tool definitions are, the worse the problem is.

I've found this to be the case even now with MCP, where you sometimes have to explicitly tell an agent to use particular tools and not to try to reinvent stuff or use bash commands.
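As a rough sketch of the two-stage approach I mean (every name here is made up for illustration, not any real MCP or agent API): give the planner a cheap one-line catalogue of everything, and only pull a tool's full definition into context when a step actually uses it.

```python
# Compact catalogue: one line per tool, cheap enough to keep in the
# planning context at all times.
TOOL_CATALOGUE = {
    "search_issues": "Search the bug tracker by keyword.",
    "create_branch": "Create a git branch off main.",
    "run_tests": "Run the test suite and report failures.",
}

# Full definitions stay out of the planning context entirely.
TOOL_SCHEMAS = {
    "search_issues": {"params": {"query": "string", "limit": "int"}},
    "create_branch": {"params": {"name": "string"}},
    "run_tests": {"params": {"path": "string"}},
}

def planning_context() -> str:
    """What the planner sees: enough to plan against, few tokens."""
    return "\n".join(f"- {name}: {desc}" for name, desc in TOOL_CATALOGUE.items())

def schema_for(tool: str) -> dict:
    """Resolved lazily, per step, once the plan commits to a tool."""
    return TOOL_SCHEMAS[tool]

if __name__ == "__main__":
    print("Planner sees:\n" + planning_context())
    print("Step 1 resolves:", schema_for("search_issues"))
```

The catalogue keeps the planner from reinventing things out of primitive operations, without paying the context cost of every full schema up front.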


Ah, the dream: a cross-platform app store where you can install apps into any client application that supports MCP, but one that is open, free and agentic.

It’s basically a “web app store”, and we sidestep the existing app stores (and their content guidelines, security restrictions and billing requirements) because it’s all done via a mega-app (the MCP client).

How could it go wrong?

If only someone had done this before, we wouldn’t be stuck in the walled gardens of Apple et al…

Seriously though, honest question: this is literally circumventing platform requirements to use the platform app stores. How do you imagine this is going to be allowed?

Is ChatGPT really big enough that they can pull the “we’re gonna do it, whatcha gonna do?” card on Apple?

Who’s going to curate this app store so non-technical users (the explicitly stated audience) can discover these MCP apps?

It feels like MCP itself; half-baked. Overly ambitious. “We’ll figure the details out later.”


The apps are LLM-agnostic, so all MCP apps will be portable. Economically, this means developers don’t have to convince users to pay $20 a month; those users are already paying that. Devs just have to convince users to buy the app on the platform.

I don’t see this being the future state. We’d be talking about a world where any and all apps exist inside of fucking ChatGPT and that just sounds ridiculous.


Hm.

It's an easy trap to fall into: saying that people are in hard situations because They Aren't Trying Hard Enough.

Your manager might think so.

Your company probably thinks you're not trying hard enough.

…but there is also reality, which is that overloading people with impossible expectations and then watching them fail isn't helpful.

It's not a learning experience.

It's just mean and selfish… even when those expectations are, perhaps, self-imposed.

If you're in one of these situations, you should ask for help.

If you see someone in them, you should offer to help.

It's well documented that gifted children struggle as adults under the weight of expectations.

The solution to this is extremely rarely self-reflection about not trying hard enough.

Geez. Talk about setting people up for failure.

The OP literally succeeded by asking for help, yet somehow, walked away with no appreciation of it.


This was sort of my takeaway too. The OP got help from someone else and thought to herself “if only I’d tried harder I could’ve done this on my own”. That doesn’t seem like a healthy takeaway.


I didn’t take it that way at all. I took it as “I was blinded from the actual solution because my vision was artificially narrow due to my past experiences with this person.” They didn’t ask for help, their partner intervened for them with a completely different and more direct approach.

I have a kid going through this right now. It’s very disheartening and frustrating to see, because even with coaching and help, they don’t see the help and suggestions as solutions, because they simply can’t see it. And as a parent you don’t want to have to intervene; you want them to learn how to dig their way out of it. But it’s tough to get them to dig when they don’t believe in shovels.


I guess I really don’t like this message because I am a disabled person. In the exercise that she describes where an instructor tells people to stand up from a position that they think they can’t stand up from, what if I actually can’t stand up? It might lead me to believe that perhaps I’m simply not trying enough.

You might think this contrived, but when people tell you over and over that you’re not trying hard enough because of things you can’t control, you internalize it.

To me, someone who has to ask for help, it seems like she didn’t really notice that help was the thing that helped.


What if the cops, the friend, and the consulate all said, "we do not care about a random mentally ill stranger, on a different continent, sending threats. You said he's been doing this for years and has done nothing yet? Sounds like you're safe. We have real crimes to solve. We have real murders to figure out. Call back if he shows up at your house, but he most certainly never will." Or maybe the FBI is like "oh, okay. Thanks. We'll keep an eye out but now this guy's part of an investigation so we can't talk about him to you." and then they do nothing, the friend doesn't reply, and the consulate is like "we're not obligated to reply." Those seem like super likely conclusions to the husband helping, too. So then would that have no longer been the "actual solution?"

It seems that the "actual solution" is only determined after the fact once there is a success, and that's used as a proxy for whether or not the actions were really trying. If she had never replied and then the guy stopped texting after a year, would that have also been Actually Trying? Maybe it would've, because one could come up with a post-hoc explanation as to why that was an Actual Try.

It feels sloppy to not distinguish what makes something a form of an Actual Try vs a successful try, because Actually Trying should be able to count failures as part of sincere attempts. Otherwise, Actually Trying collapses into being a synonym for success.


Mmm.

You're doing two things:

1) You're moving state into an arbitrary, untrusted, easy-to-modify location.

2) You're allowing users to “deep link” into a page deep inside some funnel, one that may or may not be valid, or even exist, at some future point in time, never mind their skipping the messages/whatever further up.

You probably don't want to do either of those things.
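If the funnel step really must appear in the URL, one option is to treat it as a claim to verify against server-side state rather than as the source of truth. A minimal sketch, with hypothetical step names and session store:

```python
FUNNEL_STEPS = ["intro", "details", "payment", "confirm"]

# Server-side record of how far each session has actually progressed
# (in practice this would live in a real session store, not a dict).
session_progress = {"session-123": "details"}

def resolve_step(session_id: str, requested_step: str) -> str:
    """Serve the requested step only if the session has earned it."""
    if requested_step not in FUNNEL_STEPS:
        return FUNNEL_STEPS[0]  # unknown step: restart the funnel
    reached = session_progress.get(session_id, FUNNEL_STEPS[0])
    # A deep link may not jump past what the server has recorded.
    if FUNNEL_STEPS.index(requested_step) > FUNNEL_STEPS.index(reached):
        return reached
    return requested_step

print(resolve_step("session-123", "payment"))  # -> "details"
```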


Claude is just better at coding than Cursor.

Really, the interface isn't a meaningful part of it. I also like Cmd-L, but Claude just does a better job of writing code.

...also, it's nice that Anthropic is just focusing on making cool stuff (like Skills), while the folks from Cursor are... I dunno. Whatever it is they're doing with Cursor 2.0 :shrug:


Cursor can use the Claude Sonnet and Claude Opus LLMs, so I would expect the output to be quite similar in that respect.

The agentic part of the equation is improving on both sides all the time.


There's something in the prompting, tooling, and heuristics inside the Claude Code CLI itself that makes it more than just the model it's talking to, and that becomes clear if you point your ANTHROPIC_BASE_URL at another model: the results are often almost equivalent.

Whereas I tried Kilo Code and Copilot and JetBrains' agent and others directly against Sonnet 4, and the output was ... not good ... in comparison.

I have my criticisms of Claude but still find it very impressive.


Claude Code is much more efficient, even compared to Cursor using the Anthropic models. The planning and tool use are much better.

