
I don't see how that's any better for a Haskell shop. I've got some empathy, but you chose a rough life.


The trick is to hire people who are familiar with the language.


> You can well imagine how important predicting tides would have been for D-day landing.

Is this intended to communicate positivity or negativity?

Predicting tides was known to the ancients; it would be lovely to explore the hubris of the modern narrative.

Edit: fundamentally, if hacker news has taught me anything, it's that "downvote = makes me feel bad and doesn't want to answer questions". The entire concept of democratic news aggregation was a lie.


I think there are two ways to interpret the sentence "it would have been important": one implies tidal prediction was unavailable at D-Day but would have been useful, and one implies it was indeed available (a subjunctive conditional, or "the Anderson case", apparently, per Wikipedia).

I don't think anyone is claiming tide times were so unpredictable in 1944.


They were predictable. Interestingly, Rommel misunderstood how tides affected landings. He thought the landings would be done at high tide, so the invading troops wouldn't have to advance across wide expanses of beach. In reality, the allies wanted to invade on a rising tide, so the landing craft, grounded to let out troops, would refloat and be able to move back out. Also, invading at lower tide meant beach obstacles would be exposed and unable to damage the landing craft.


> Edit: fundamentally, if hacker news has taught me anything, it's that "downvote = makes me feel bad and doesn't want to answer questions". The entire concept of democratic news aggregation was a lie.

Guidelines:

> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

That it feels bad not to win the popular vote does not make democracy a lie, and there's no surprise in not winning favor when blanket-discarding the current topic, describing it as "hubris", and not adding any new or constructive information.


> Is this intended to communicate positivity or negativity?

It just says it was important to predict the tides. There is no positivity or negativity to it. Your question doesn’t make sense, hence the downvotes.

> Predicting tides was known to the ancients

Good. To which ancients? With what accuracy and how far into the future? What techniques did they use? Tell us more.

> it would be lovely to explore the hubris of the modern narrative.

Explore it then! Would love to read it. It is not like there is some conspiracy holding you back.


>> Predicting tides was known to the ancients

> Good. To which ancients?

To the ancients of 1944 for sure.


This just seems to illustrate the complexity of compiler authorship. I am very sure C compilers aren't able to address this issue any better in the general case.


Keep in mind Rust is using the same backend as one of the main C compilers, LLVM. So if C is handling it any better, that means the Clang developers handle it before the code even reaches the shared LLVM backend. Well, or there is something about the way Clang structures the code that catches a pattern in the backend which the Rust developers do not know about.


I mean, yeah, I just view Rust as the quality-oriented spear of Western development.

Rust is absolutely an improvement over C in every way.


The Rust issue has people trying this with C code, and the compiler exhibits the same issue. This will get fixed, and it'll help both C and Rust code.


Out of curiosity, just Clang, or GCC as well?


I just tried it, and the problem is even worse in gcc.

Given this C code:

    #include <stdint.h>

    typedef struct { uint16_t a, b; } pair;

    int eq_copy(pair a, pair b) {
        return a.a == b.a && a.b == b.b;
    }
    int eq_ref(pair *a, pair *b) {
        return a->a == b->a && a->b == b->b;
    }
Clang generates clean code for the eq_copy variant, but complex code for the eq_ref variant. GCC emits pretty complex code in both variants.

For example, here's eq_ref from gcc -O2:

    eq_ref:
        movzx   edx, WORD PTR [rsi]
        xor     eax, eax
        cmp     WORD PTR [rdi], dx
        je      .L9
        ret
    .L9:
        movzx   eax, WORD PTR [rsi+2]
        cmp     WORD PTR [rdi+2], ax
        sete    al
        movzx   eax, al
        ret
Have a play around: https://c.godbolt.org/z/79Eaa3jYf
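
For what it's worth, here's a hedged sketch of a workaround (eq_bytes is my own name, not from the linked issue): because this particular struct has no padding, you can compare the raw bytes with memcmp, which Clang and GCC usually lower to a single 32-bit load and compare instead of the branchy field-by-field code above.

    #include <stdint.h>
    #include <string.h>

    typedef struct { uint16_t a, b; } pair;

    /* Compare the whole struct as raw bytes. Only equivalent to field-wise
       equality because pair has no padding (two uint16_t = 4 bytes); with
       padding bytes present, memcmp could compare garbage. */
    int eq_bytes(pair *a, pair *b) {
        return memcmp(a, b, sizeof *a) == 0;
    }

No guarantees across compiler versions, so it's worth pasting into the same Godbolt link to verify.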


Quantum key distribution is a real thing. It's a very specific set of constraints that would make it generally attractive, though: not relying on networks of trust to function, maybe? Key distribution as a dumb pipe.


I'm not asking if it's "a real thing". I'm asking if it's used anywhere, either commercially or -- in any case -- to encrypt actually sensitive data outside of tech demonstrations. I'm not aware of anything. China has a "qkd backbone" but if they're actually using it for anything is anyone's guess.


This seems like a really dumb thing to emphasize for marketing ends outside of actual competency. But what else is new!


I'm having trouble parsing your comment... What is a dumb thing? Who is emphasizing what?


Ah, understandable.

> what is a dumb thing?

Emphasizing quantum key distribution. In the situations where it's really relevant you don't need buy-in from investors; the value is evident.

Sayonara!


I think that was more of a language-oriented effort than a runtime/ABI-oriented effort.


Parrot was intended to be a universal VM. It wasn’t just for Perl.

https://www.slideshare.net/slideshow/the-parrot-vm/2126925


Sure, I just think that's a very odd way to characterize the project. Basically anything can be a universal VM if you put enough effort into reimplementing the languages. Much of what sets Parrot apart is its support for frontend tooling.


“The Parrot VM aims to be a universal virtual machine for dynamic languages…”

That’s how the people working on the project characterized it.


I certainly think the humor in Parrot/Rakudo (and why they still come up today) is how little of their own self-image the proponents could perceive. The absolute irony of thinking that Perl's strength was due to familiarity with text manipulation rather than its cultural mass...


I'm excited about this too, but it's a little concerning that there's a brand in the title. There's no shortage of people from ATI, Intel, AMD, Apple, IBM, the game gaggle, etc. to interview. The fact that Nvidia succeeded where others failed is largely an artifact of luck.


I would say nVidia made their own luck. A lot of their success can be attributed to their management never losing sight of the fact that the software is just as important as the hardware. Both drivers and CUDA are key to nVidia's success. ATI and nVidia would trade places on quality of hardware, but there was never a question on the software side.


I'm not sure how Nvidia's driver track record would have helped them; neither drivers nor Linux nor software in any way has ever really been Nvidia's strong suit. And even with its popularity, CUDA alone cannot explain Nvidia's success; you also need the demand from buttcoin, the secondary-but-farcical imitation of LLMs, and the inexplicable lack of awareness of alternatives, all of which need explaining...


I worked on CUDA and OpenCL in the 2010-2014 timeframe, well before buttcoin and LLMs were profit centers, and Nvidia was already well ahead in the "GPUs as general compute" area. Literally everyone doing highly parallel HPC wanted to use Nvidia, despite AMD having higher throughput for some workloads. It was better, easier-to-use software.


I'll add to that: even though it is true that "neither drivers nor Linux nor software in any way has ever really been Nvidia's strong suit", as GP put it, their software was still miles ahead of its competitors'. In the land of the blind the one-eyed man is king, and all that.


I definitely recall Nvidia cards were consistently priced higher than AMD cards with equivalent hardware as early as 2011.

So judging by actions, not words, they had a clear software/firmware advantage by then already.


I would agree if I didn't associate Nvidia with incomprehensibly bad support for drivers. Idk, maybe this is a Linux-only thing, but it's hard to imagine a vendor with a worse reputation for delivering functional software. Only perhaps Microsoft itself has tried to be even more anti-consumer in its approach to support.

CUDA is ok, but it's the sheer mass of people targeting the platform that makes it more interesting than e.g. OpenCL. It hasn't done anything clearly better aside from aggressively trying to court developer interest. It will pass and we will have stronger runtimes for it.

What really sets Nvidia apart is its ability to market to people. Truly phenomenal performance.


Yea sure I see what you mean. So can you tell me what reputation AMD has for CUDA support on Linux? Or any of the other GPU providers?

They have none because their driver support is nonexistent. That's why everyone under the sun uses Nvidia despite their abysmal software support: it's still better than everyone else.


The author worked at Nvidia and explains some of the engineering decisions made at the time. Why shouldn’t the brand be in the title?


Because the author is far more interesting than the brand? I'm not sure what would justify branding your personal observations outside of bandwagoning onto hopeful investors.


> largely an artifact of luck.

I disagree with "largely". Luck is always a factor in business success and there are certainly some notable examples where luck was, arguably, a big enough factor that "largely" would apply - like Broadcast.com's sale to Yahoo right at the peak of the .com bubble. However, I'm not aware of evidence luck was any more of a factor in NVidia's success than the ambient environmental constant it always is for every business. Luck is like the wind in competitive sailing - it impacts everyone, sometimes positively, sometimes negatively.

Achieving and then sustaining substantial success over the long run requires making a lot of choices correctly as well as top notch execution. The key is doing all of that so consistently and repeatedly that you survive long enough for the good and bad luck to cancel each other out. NVidia now has over 30 years of history through multiple industry-wide booms, downturns and fundamental technology transitions - a consistent track record of substantial, sustained success so long that good luck can't plausibly be a significant factor.

That said, to me, this article didn't try to explain NVidia's long-term business success. It focused on a few key architectural decisions made early on which were, arguably, quite risky in that they could have wasted a lot of development on capabilities which didn't end up mattering. However, they did end up paying off and, to me, the valuable insight was that key team members came from a different background than their competitors and their experiences with multi-user, multi-tasking, virtualized mini and mainframe architectures caused them to believe desktop architectures would evolve in that direction sooner rather than later. The takeaway being akin to "skate to where the puck is going, not where it is." In rapidly evolving tech environments, making such predictions is greatly improved when the team has both breadth and depth of experience in relevant domains.


Nvidia’s Cg language made developers prefer their hardware, I’d say.


Which influenced HLSL, with their close collaboration with Microsoft on DirectX.


Concerned about a "brand in the title"? What do you expect?

The author is David Rosenthal, who was employee #4 at Nvidia (Chief Scientist).

He's not some random historian or interviewer. That's his life experience.


Sure, if you still think the word has meaning.


Yes, I do. Any way you slice this term, it looks close to what ML models are learning through training.

I'd go as far as saying LLMs are meaning made incarnate - that huge tensor of floats represents a stupidly high-dimensional latent space, which encodes semantic similarity of every token, and combinations of tokens (up to a limit). That's as close as reifying the meaning of "meaning" itself as we ever come.

(It's funny that we got there through brute force instead of developing philosophy, and it's also nice that we get a computational artifact out of it that we can poke and study, instead of incomprehensible and mostly bogus theories.)
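
If it helps make "encodes semantic similarity" concrete, here's a toy sketch of how similarity is usually measured in such a latent space: cosine similarity between embedding vectors. The 4-dimensional vectors below are entirely made up for illustration; real models use learned vectors with hundreds or thousands of dimensions.

    #include <math.h>
    #include <stdio.h>

    /* Cosine similarity: dot(x, y) / (|x| * |y|), close to 1 for vectors
       pointing in nearly the same direction. */
    static double cosine(const double *x, const double *y, int n) {
        double dot = 0, nx = 0, ny = 0;
        for (int i = 0; i < n; i++) {
            dot += x[i] * y[i];
            nx  += x[i] * x[i];
            ny  += y[i] * y[i];
        }
        return dot / (sqrt(nx) * sqrt(ny));
    }

    int main(void) {
        /* Made-up embeddings; in a trained model these come from the network. */
        double king[]  = {0.8, 0.6, 0.1, 0.2};
        double queen[] = {0.7, 0.7, 0.1, 0.3};
        double table[] = {0.1, 0.0, 0.9, 0.8};
        printf("king~queen: %.2f\n", cosine(king, queen, 4)); /* high   */
        printf("king~table: %.2f\n", cosine(king, table, 4)); /* lower  */
        return 0;
    }

(Compile with -lm.)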


ML is a kind of memorization, though.


Anything can be a kind of something since that's subjective...


But it's more a kind of memorization than understanding and reasoning.


Yea, anything can be a kind of something else. :clown:

Bruh. Do you need to be paid to interact in good faith, or were you raised to be social?


I can't speak for anyone else, but these models only seem about as smart as google search, with enormous variability. I can't say I've ever had an interaction with a chatbot that's anything redolent of interaction with intelligence.

Now would I take AI as a trivia partner? Absolutely. But that's not really the same as what I look for in "smart" humans.


> But that's not really the same as what I look for in "smart" humans.

Note that "smarter than smart humans" and "smarter than most humans" are not the same. The latter is a pretty low bar.


Have you tried any SOTA models like o3?

If not, I strongly encourage you to discuss your area of expertise with it and rate based on that

It is incredibly competent


SOTA models can be pretty smart, but this particular model is a very far cry from anything SOTA.


I'm not really sure what to look for, frankly. It makes a rather uninteresting conversation partner, and its observations of the world are bland and mealy-mouthed.

But maybe I'm just not looking for a trivia partner in my software.


The image description capabilities are pretty insane; it's crazy to think it's all happening on my phone. I can only imagine how interesting this is accessibility-wise, e.g. for vision-impaired people. I believe there are many more possible applications for these models on a smartphone than just chatting with them.


>anything redolent of interaction with intelligence

compared to what you are used to, right?

I know it's elitist, but most people <=100 iq (and no, this is not exact, obviously, but we don't have much else to go by) are just... well, a lot of state-of-the-art LLMs are better at everything by comparison, outside of body 'things' (for now) of course, as they don't have any. They hallucinate/bluff/lie as much as the humans do, and the humans might know that they don't know, but outside of that, the LLMs win at everything. So I guess that, for now, people with 120-160 IQs find LLMs funny but wouldn't call them intelligent, but below that...

My circle of people I talk with during the day has changed since I took on more charity work, which consists of fixing up old laptops and installing Ubuntu on them; I get them for free from everyone and give them to people who cannot afford one, including some lessons and remote support (which is easy, as I can just ssh in via tailscale). Many of them believe in chemtrails, that vaccinations are a government ploy, etc., and multiple have told me they read that these AI chatbots are Nigerian or Indian (or similar) farms trying to defraud them of 'things' (they usually don't have anything to defraud, otherwise I would not be there). This is about half of humanity; Gemma is going to be smarter than all of them, even though I don't regard any LLM as intelligent, and with the current models it won't happen either. Maybe a breakthrough in models will change that, but there's not much chance of it yet.


> but most people <=100 iq

This is incorrect: IQ tests are normally scaled such that average intelligence is 100, and such that scores are approximately normally distributed, so most people will be somewhere between 85 and 115 (roughly 68%).
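
As a quick check of that figure (a back-of-the-envelope sketch assuming the usual mean-100, SD-15 scaling): the 85-115 range is plus or minus one standard deviation, and the fraction of a normal distribution within one SD is erf(1/sqrt(2)), about 0.68.

    #include <math.h>
    #include <stdio.h>

    /* Fraction of a normal distribution within one standard deviation of
       the mean, i.e. IQ between 85 and 115 under mean-100 / SD-15 scaling. */
    int main(void) {
        double within_one_sd = erf(1.0 / sqrt(2.0));
        printf("fraction between 85 and 115: %.4f\n", within_one_sd); /* ~0.6827 */
        return 0;
    }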


> but most people <=100 iq

> average intelligence is 100

You both are saying the same thing.

IQ is defined such that both average and mean would equal 100. The combination of sub-100 and exactly-100 would be more people than above-100, hence "most people <=100 iq".


Both average and mean, you say.


Oops, I meant both median and mean.


Frankly I have no clue what value the term "average" has after trying to follow this conversation.


Yep, and those people can never 'win' against current LLMs, let alone future ones. Outside motor control, which I specifically excluded.

85 is special housing where I live... LLMs are far beyond that now.


I'm not convinced this is true. I suspect that they'd mostly fail a version of Raven's matrices that didn't appear in the training set.


How are you living such that you're regularly pitting humans against computers?

Not only is this unbelievable, it's reprehensible


Judging from your comment, it seems that your statistical sample is heavily biased as well, as you are interacting with people that can't afford a laptop. That's not representative of the average person.


I agree, but what are we supposed to do? Insight has always been paired with the empathy of not having it, but...

I refuse to touch the IQ bait

