
Right, and Pets.com isn't OpenAI. It's frustrated trolling dressed up in "greybeard" clothes.


This is all about OpenAI, not about AI being subsidized... or is there some sort of directive to copy/paste "OpenAI" for all the big AI providers? (presumably you meant s/OpenAI/$PROVIDER/?)

If that's what you meant: Google. Boom.

Also, perhaps you're a bit new to the industry, but that's how these things go. They burn a lot of capital building it out b/c they can always fire everyone and just serve at cost -- i.e. subsidizing business development is different from subsidizing inference. Unless you're just sort of confused and angry at the whole situation, and it all collapses into "everyone's losing money and no one will admit it."


You're replying to a story about a hyperscaler worrying investors about how much they're leveraging themselves for a small number of companies.

From the article:

> OpenAI faces questions about how it plans to meet its commitments to spend $1.4tn on AI infrastructure over the next eight years.

Someone needs to pay for that $1.4tn; spread over eight years, that's roughly 2/3 of what Microsoft makes in a year. If you think they'll make that from revenue, that's fine. I don't. And that's just the infra.


Source on it being subsidized? :) There isn't one, other than an aggro subset of people lying to each other that somehow literally everyone is losing money while posting record profit margins. (https://en.wikipedia.org/wiki/Hitchens%27s_razor)


If it's not profitable, it's running on capital. Subsidized.


Counterpoint: I Googled "jmap gmail" and a top result is an HN comment from 2019 saying Gmail will never implement JMAP (it has not).

That's a really cruel response, because this is important work. I don't want my kids beholden to bigco.

I think it's real & important.

I also wanna make sure people like me, who have to keep tabs on the intersection of "how can I help liberate people from BigCo" and "how can I make a livable wage doing so," get a realistic picture.

It is, quite literally, real, but it's also something you shouldn't waste time on if you're already busy (cf. https://jmap.io/software.html).


What does "actually reason" mean? It's doing this complex anesthesiologist x CRNA x resident surgery-scheduling thing for ~60 surgeries a day for this one client. Looked a lot like LSAT logic-games stuff scaled up to me; took me almost 20-30m to hand-check. Is that reasoning?
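(For a sense of what the hand-check involves, a minimal sketch in Python, with hypothetical fields and rules, nothing from the actual system; it just flags staff double-booked across overlapping cases.)

  from itertools import combinations

  # each case: (case_id, anesthesiologist, crna, resident, start_hour, end_hour)
  schedule = [
      ("c1", "dr_a", "crna_x", "res_1", 8, 10),
      ("c2", "dr_a", "crna_y", "res_2", 10, 12),
      ("c3", "dr_b", "crna_x", "res_1", 9, 11),
  ]

  def overlaps(a, b):
      # two cases overlap if each starts before the other ends
      return a[4] < b[5] and b[4] < a[5]

  def violations(cases):
      # no staff member can be assigned to two overlapping cases
      out = []
      for a, b in combinations(cases, 2):
          shared = {a[1], a[2], a[3]} & {b[1], b[2], b[3]}
          if shared and overlaps(a, b):
              out.append((a[0], b[0], shared))
      return out

  print(violations(schedule))  # c1/c3 double-book crna_x and res_1

Scale that to ~60 surgeries and the real rules, and 20-30m of hand-checking sounds about right.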


> ...they were the only ones at the table with OpenAI many months before the release of GPT-OSS

In the spirit of TFA:

This isn't true, at all. I don't know where the idea comes from.

You've been repeating this claim frequently. You were corrected on this 2 hours ago. llama.cpp had early access to it just as well.

It's bizarre for several reasons:

1. It is a fantasy that engineering involves seats at tables and bands of brothers growing from a hobby to a ???, one I find appealing and romantic, but a fantasy nonetheless. Additionally, no one mentioned or implied anything about it being a hobby or unserious.

2. Even if it wasn't a fantasy, it's definitely not what happened here. That's what TFA is about, ffs.

No heroics. They got the ultimate embarrassing thing that can happen to a project piggybacking on FOSS: ollama can't work with the materials OpenAI put out to help ollama users, because llama.cpp and ollama had separate day-1 landings of code, and ollama has zero path to forking literally the entire community over to its format. They were working so loosely with OpenAI that OpenAI assumed they were being sane and weren't attempting to use the release as an excuse to force a community fork of GGUF, and no one realized until after it shipped.

3. I've seen multiple comments from you this afternoon spinning out odd narratives about Ollama and llama.cpp that don't make sense on their face from the perspective of someone who also deps on llama.cpp. AFAICT you understood the GGML fork as some halcyon moment of freedom / not-hobbiness for a project you root for. That's fine. Unfortunately, reality is intruding, hence TFA. Given you're aware, it makes your humbleness re: knowing what's going on here sound very fake, especially when it precedes another rush of false claims.

4. I think at some point you owe it to even yourself, if not the community, to take a step back and slow down on the misleading claims. I'm seeing more of a gish-gallop than an attempt to recalibrate your technical understanding.

It's been almost 2 hours since you claimed you were sure there were multiple huge breakages due to bad code quality in llama.cpp, and here, we see you reframe that claim as a much weaker one someone else made to you vaguely.

Maybe a good first step to avoiding information pollution here would be to take the time you've spent repeating other people's technical claims you didn't understand, and use it to find some of those breakages you know for sure happened, as promised previously.

In general, I sense a passionate but youthful spirit, not an astro-turfer, and this isn't a group of professionals being disrespected because people still think they're a hobby project. Again, that's what the article is about.


Wow, I wasn't expecting this. These are fair critiques, as I am only somewhat informed about what is clearly a very sensitive topic.

For transparency: I attended ICML 2025, where Ollama had set up a booth, and had a casual conversation with the representatives there (one of whom turned out to lead the Ollama project) before they went to their 2nd-birthday celebration. I'm repeating what I can remember from the conversation, about ten minutes or so. I am a researcher not affiliated with the development of llama.cpp or Ollama.

> for a project you root for

I don't use Ollama, and I certainly don't root for it. I'm a little disappointed that people would assume this. I also don't use llama.cpp and it seems that is the problem. I'm not really interested in the drama, I just want to understand what these projects want to do. I work in theory and try to stay up to date on how the general public can run LLMs locally.

> no one realized until after it shipped.

I'm not sensing that the devs at Ollama are particularly competent, especially when compared to the behemoths at llama.cpp. To me, this helps explain why their actions differ from their claimed motivation, but this is probably because I prefer to assume incompetence over something sinister.

> as promised previously...

I don't think I made any such promises. I can't cite those claimed breakages, because I do not remember further details from the conversation needed to find them. The guy made a strong point to claim they had happened and there was enough frustrated rambling there to believe him. If I had more, I would have cited them. I remember seeing news regarding the deprecation of multimodal support hence the "I could swear that" comment (although I regret this language, and wish I could edit the comment to tone it down a bit), but I do not think this was what the rep cited. I had hoped that someone could fill in the blanks there, but if knowledgeable folks claim this is impossible (which is hard to believe for a project of this size, but I digress), I defer to their expert opinion here.

> llama.cpp had early access to it just as well.

I knew this from the conversation, but was told Ollama had even earlier discussions with OpenAI as the initial point of contact. Again, this is what I was told, so feel free to critique it. At that time, the rep could not explicitly disclose that it was OpenAI, but it was pretty obvious from the timing due to the delay.

> avoiding information pollution

I'm a big believer in free speech and that the truth will always come out eventually.

> I'm seeing more of a gish-gallop than an attempt to recalibrate your technical understanding...

> I sense a passionate but youthful spirit, not an astro-turfer,

This is pretty humbling, and frankly comes off a little patronizing, but I suppose this is what happens when I step out of my lane. My objective was to stimulate further conversation and share a perspective I thought was unique. I can see this was not welcome, my apologies.


You should revisit this in a year; I think you'll understand how you came off a bit better. TL;DR: it's a horrible idea to show up gossiping to cast aspersions, then disclaim responsibility because it's secondhand and you didn't understand it, on a professional specialist site.

Your rationale for your choices doesn't really matter, you made the choice to cast aspersions, repeatedly, on multiple stories in multiple comments.

Handwaving that the truth will come out through discussion, while repeatedly casting aspersions you disclaim understanding of, expressing surprise you got a reply, and making up more stuff to justify the initial aspersions, is indicative of how coherent your approach seems to the rest of us.

In general, I understand why you feel patronized. Between the general incoherence, this thread, and the previous thread where you applied not-even-wrong (in the Pauli sense) concepts like LTS and Linux kernel dev to this situation, the only real choice that lets anyone assume the best of you is that you're 15-25 and don't have a great grasp of tech yet.

Otherwise, you're just some guy gossiping, getting smarmy about it, with enough of an IQ to explain back to yourself why you didn't mean to do it. Zero idea why someone older and technical would do all that on behalf of software they don't use and don't understand.


Why bother to be on a discussion forum if you're so pressed for time that you're lashing out at people agreeing with you and consider any interlocution a waste of time?

From the outside, it looks like all you get out of this is feeling upset, and it makes us wonder how you misread so wildly.


I didn't misread anything. refulgentis edited his comment after I replied, and the issue seems to stem from his failure to understand the comment I replied to. Without that context, my reply obviously won't make sense either.

The comment I replied to is pretty standard feminist dreck, and if you can't understand it, maybe interlocute them first? Debate is one thing, spoon-feeding you explanations of someone else's comment is another.

Still, I'll give you a hint: both pessimizer and I agree that men and women are physically different - again, I did not misread them in that regard. The difference is that I take that to a logical conclusion, while they are bound by ideology to stop short.


What, exactly, did my edits change?

Here's the second sentence of this post you've claimed twice now believes men and women are no different: "There is a massive part of the West who believes that to acknowledge that women are physically different than men is the real sexism."

Genuinely, I hope you're well.


> Here's the second sentence of this post you've claimed twice now believes men and women are no different

I'm literally not claiming that, lmao. Sorry, but I won't be replying to you further because you're clearly functionally illiterate.

What I said was that:

> both pessimizer and I agree that men and women are physically different

If you don't understand a chain of comments, start from the top, and stop wasting my time.


Here's the second sentence of this post you've claimed twice now believes men and women are no different: "There is a massive part of the West who believes that to acknowledge that women are physically different than men is the real sexism."


> midpoint of the 0-1 luminance range

There are two relevant quantities here, relative luminance and perceptual lightness, so that was a nugget for those not as wise as you who might not know it :) As you know and have mentioned, using 0.5 with the luminance calculation you mentioned, i.e. relative luminance, would be in error. (I hate being pedantic, but it's important for some parties: a11y is a de facto legal requirement for a lot of work, and 0.5 would be spot on for ensuring WCAG 2 text contrast as long as it's used with perceptual lightness, L*.)

> doesn't align with human perception

It is 100% aligned with how humans perceive brightness; in fact, it's a stable work product dating back to the early 1900s.

> Ultimately one could use 0.18-0.3 as threshold

Perceptual lightness (L*) and relative luminance have precise mathematical definitions; each can be calculated from the other.

If you need to hit contrast K with background color C, you won't be able to treat this as a variable. What you pass along about it being variable is valuable, of course, in that, given K and C, the output has a range: if the contrast algo says you need +/-40 L* for your text against your background, and your C is at 50 L*, your palette is everything from 90 L* to 100 L* and from 0 L* to 10 L*.
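To make the distinction concrete, a minimal sketch (standard sRGB and CIE formulas; the names and the L* = 50 threshold choice are mine):

  def srgb_to_linear(c):
      # gamma-expand an sRGB channel, c in 0..1
      return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

  def relative_luminance(r, g, b):
      # Rec. 709 / WCAG 2 relative luminance Y, in 0..1
      return (0.2126 * srgb_to_linear(r)
              + 0.7152 * srgb_to_linear(g)
              + 0.0722 * srgb_to_linear(b))

  def lightness(y):
      # CIE L* from relative luminance Y, white point Y_n = 1
      f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
      return 116 * f - 16

  def is_dark(r, g, b):
      # "0.5" on the perceptual scale means L* = 50, i.e. Y of roughly 0.18
      return lightness(relative_luminance(r, g, b)) < 50

  print(is_dark(0.5, 0.5, 0.5))  # False: 50% sRGB gray lands near L* 53

Note that L* = 50 corresponds to Y of about 0.184, which is where rules of thumb like the 0.18-0.3 threshold come from.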


So 0.5 is correct after all?! I thought I was completely off with 0.5, and that it didn't align with human perception, because I assumed I was wrong. Ouch. In my defense, it has been a while. :D

BTW, would this relatively simple way to determine if the color is dark work?

  # Rec. 601 luma approximation on gamma-encoded $r, $g, $b in 0-255
  $luminance = 0.299 * $r + 0.587 * $g + 0.114 * $b;
  return $luminance < $threshold;
Where $threshold is 128? IIRC 128 is a common threshold in this case.


I once did it with 0.3R + 0.6G + 0.1B < 128, mostly because I could not be bothered to think deeper than that. It was certainly not perfect — there were obvious cases where the opposite choice of black or white for contrast would have been better — but it worked well enough for my purpose (making a label at least not unreadable regardless of background colour).


I implemented something where contrast matters a lot, so I am using more sophisticated algorithms, but yeah, there are cases where you could get away with less sophisticated ones. :D


Be careful when rushing. Your viewpoint as expressed is perfectly rational.

However, you know that both of the claims asked about are blatantly false, and you were distracted by the idea that calling those claims false also implies all the proposed alternatives are based on lies.

To wit, the ask was "Is it really true that there are no theories that are proven or discarded with this experiment, and that the Chinese have plans to do it much faster? Her video is pretty damning."

Both of those things are clearly false.

The Chinese part is blatantly false, to the point it can be worked out by a layman who knows that years ascend.

The Standard Model itself is in question; modulo semantics about proven/disproven and the philosophy of certainty, by any reasonable definition, theory is at stake.


I lay it out above, the linked IEEE article lays it out, and you come across to me as having the domain knowledge to understand it: electron-positron collisions at the same energy level as the LHC let us nail down the hints of what we saw at the LHC -- persistent deviations from the Standard Model that require new theory.

When we get new theory, then we go hunting new particles, presuming it's physically possible (as you point out with the incorrect idea that this might be being built to look for confirmation of string theory).

I understand the question "this won't find new particles, so is it worth it?", but the ideas that this is unclear, confusing, misguided, or hoping for an outcome are trivially verifiable as false.

Things like:

- "The scientific goals...are unclear" (they are very clear!)

- "(modulo some error)" (reducing the error in the glimpses of deviation from the standard model is the interesting part, 5 sigma or bust, because that lets the theorists know how to progress. This isn't just "oh we'd like to reduce error bars, a less-entitled discipline would just get some grad students on SPSS", this is "holy shit...looks like we found something is fucky in our fundamentals here, but all we know is its off. we need to figure out by how much to give the theorists more data")

- "string theory et al" (I worry very much about the effectiveness of my communication if this is coming up, to be clear, no one is attempting to verify string theory, and it doesn't come up at all even in Sabine's arguments, no? )

The IEEE article lays out that this is not about discovering particles.

No one thinks new particles will be discovered.

The investment is not based on speculating new particles will be discovered.

The investment is not based on bad theory that new particles will be discovered.

The investment is not to find a sneaky way to hopefully accidentally find new particles.

Investments in colliders in general haven't been speculative hunts for new particles in decades.

As the IEEE article, open-source information, and my comment above all lay out, they are specifically for nailing down these previously-assumed-settled values in the Standard Model, because getting more data on the things theory can't explain leads to informed revisions of the theory. The next pendulum swing after that data would be theory telling us a narrow band of energies to look at for any new particles the theory needed to fix the Standard Model.


I don't care about Sabine and I'm not defending her. There are lots of other people who think this is a bad idea, and Nature has quoted "dozens" of them.

The error they saw isn't interesting unless it leads to something. There aren't even good theories about what it might lead to, other than some extra significant figures on some constants that nobody uses. Surely you can see there is a problem with doing science this way.

Theory precedes experiment. It always has, and you can't call what you're doing "science" unless that is true.


> Theory precedes experiment. It always has

This is laughably false, even in fundamental physics.

No one saw neutrinos coming for example.


> No one saw neutrinos coming for example.

Other than Pauli, you mean? It was hypothesized around 1930 and finally detected in 1956.

The basis of the scientific method is the following:

1. You notice something.

2. You ask a question.

3. Form a hypothesis.

4. Experiment.

5. Analyze.

6. Draw conclusions.


> Other than Pauli, you mean?

After energy went missing in beta decay!

Pauli speculated that neutrinos would exist after an experimental anomaly.

No one really saw it coming.

No one saw 3 generations coming either, for example.

In fact, people were baffled.


> In fact, people were baffled.

Yes, but what is the FCC supposed to look for?

If it's just dark matter and matter/antimatter asymmetry, within what theoretical framework is it going to explain them? Will it explain them, or will it just say "your asymmetric partners are in another order-of-magnitude collider"? Or maybe there are actually 34 dimensions and not, like, four.


> No one thinks new particles will be discovered.

Blatantly false. Plenty of FCC docs from CERN itself mention the possibility that new particles could be discovered, from dark matter to axions. They even think they could help gather data to guide searches for supersymmetric partners.

[edited to add links and quotes]

https://fcc-cdr.web.cern.ch/reports/EPPSU18_FCCint.pdf

> In addition to the dark matter examples given before, Volume 1 documents the extraordinary sensitivity to less-than-weakly coupled particles, ranging from heavy sterile neutrinos (see Fig. 5, right) down to the see-saw limit in a part of parameter space favourable for generating the baryon asymmetry of the Universe, to axions and dark photons.

https://fcc.web.cern.ch/physics

> Future searches at lepton and proton colliders would further constrain any viable scenarios and put progressively tighter bounds to SUSY candidate particles. Searches could profit from data collected at the FCCs as they will allow better discrimination of the Standard Model backgrounds but also deliver more information for event reconstruction.


Supersymmetry is on its last legs after the LHC didn't find any supersymmetric particles. WIMPs and other dark-matter particles are no longer speculated to be on the menu because they are too light for this energy range.

There's lots of "could" in your own post and your sources. Very little "will" - as in "will test X theory."


The poster above was claiming, in several posts, that the people at CERN and experimental particle physicists more broadly are being unfairly represented by claims that they are including possible new particles in the case for building the FCC. They even found an IEEE publication about it that (apparently) made no such claims and stuck to well motivated physics.

I was merely showing that there is nothing unfair about it, as all materials about the FCC, at least from CERN, come with beliefs about the chance that new particles could be found. Sure, they don't make hard claims that they will be found. But even the claims that they could be found are unfounded. It's about as likely that I'll spot a WIMP in my oven, if I look carefully while it's pretty hot, as it is that WIMPs will be found at the FCC. This speculation has no place in serious discussions about this level of spending and human effort.

If this were an abstract discussion at a panel and someone was asking "what are some speculations about what we could see at the FCC", it would be perfectly fine to go on about SUSY and dark matter detection and axions and whatever else. But this has no place whatsoever in official documents about the scientific purpose of allocating billions of euros to this project. It is blatant speculation to pad out an otherwise pretty thin motivation. It's like writing a proposal for a new build system at your company and including speculation that it might detect security vulnerabilities automatically, or it might reduce build times a hundred fold.

