> You can well imagine how important predicting tides would have been for D-day landing.
Is this intended to communicate positivity or negativity?
Predicting tides was known to the ancients; it would be lovely to explore the hubris of the modern narrative.
Edit: fundamentally, if hacker news has taught me anything, it's that "downvote = makes me feel bad and doesn't want to answer questions". The entire concept of democratic news aggregation was a lie.
I think there are two ways to interpret "it would have been important": one implies tidal prediction was unavailable at D-Day but would have been useful, and the other implies it was indeed available (a subjunctive conditional, or "the Anderson case", apparently, per Wikipedia).
I don't think anyone is claiming tide times were so unpredictable in 1944.
They were predictable. Interestingly, Rommel misunderstood how tides affected landings. He thought the landings would be done at high tide, so the invading troops wouldn't have to advance across wide expanses of beach. In reality, the allies wanted to invade on a rising tide, so the landing craft, grounded to let out troops, would refloat and be able to move back out. Also, invading at lower tide meant beach obstacles would be exposed and unable to damage the landing craft.
> Edit: fundamentally, if hacker news has taught me anything, it's that "downvote = makes me feel bad and doesn't want to answer questions". The entire concept of democratic news aggregation was a lie.
Guidelines:
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
That it feels bad not to win the popular vote does not make democracy a lie, and there's no surprise in not winning favor when you dismiss the current topic wholesale as "hubris" while not adding any new or constructive information.
This just seems to illustrate the complexity of compiler authorship. I am not at all sure C compilers are able to address this issue any better in the general case.
Keep in mind Rust is using the same backend as one of the main C compilers, LLVM. So if C is handling it any better, that means the Clang developers handle it before it even reaches the shared LLVM backend. Or there is something about the way Clang structures the code that hits a pattern in the backend that the Rust developers do not know about.
I just tried it, and the problem is even worse in gcc.
Given this C code:
    #include <stdint.h>

    typedef struct { uint16_t a, b; } pair;

    /* compare by value */
    int eq_copy(pair a, pair b) {
        return a.a == b.a && a.b == b.b;
    }

    /* compare through pointers */
    int eq_ref(pair *a, pair *b) {
        return a->a == b->a && a->b == b->b;
    }
Clang generates clean code for the eq_copy variant, but complex code for the eq_ref variant. GCC emits pretty complex code for both variants.
For example, here's eq_ref from gcc -O2:
    eq_ref:
            movzx   edx, WORD PTR [rsi]     # load b->a
            xor     eax, eax                # result = 0
            cmp     WORD PTR [rdi], dx      # a->a == b->a ?
            je      .L9                     # equal: check the second field
            ret                             # not equal: return 0
    .L9:
            movzx   eax, WORD PTR [rsi+2]   # load b->b
            cmp     WORD PTR [rdi+2], ax    # a->b == b->b ?
            sete    al                      # result = (fields equal)
            movzx   eax, al                 # widen to int
            ret
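
Tangent, but if you want branch-free code out of both compilers today, the usual workaround is to memcpy the struct into a uint32_t and compare once; since the struct is two uint16_t with no padding, the bitwise compare is equivalent to the field-wise one. A minimal sketch (eq_ref_bits is just a name I made up, and "both compilers fold the memcpy into a single 32-bit load" is my expectation at -O2, not something I re-verified for every compiler version):

    #include <stdint.h>
    #include <string.h>

    typedef struct { uint16_t a, b; } pair;   /* same type as above */

    /* Compare the raw 32-bit representation in one go. memcpy is the
       well-defined way to type-pun here; at -O2 it normally becomes a
       plain 32-bit load, so the whole function is load/load/cmp/sete. */
    int eq_ref_bits(const pair *a, const pair *b) {
        uint32_t x, y;
        memcpy(&x, a, sizeof *a);
        memcpy(&y, b, sizeof *b);
        return x == y;
    }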
Quantum key distribution is a real thing. There's a very specific set of constraints under which it would be generally attractive, though, namely not relying on networks of trust to function. Key distribution as a dumb pipe.
I'm not asking if it's "a real thing". I'm asking if it's used anywhere, either commercially or -- in any case -- to encrypt actually sensitive data outside of tech demonstrations. I'm not aware of anything. China has a "QKD backbone", but whether they're actually using it for anything is anyone's guess.
Sure, I just think that's a very odd way to characterize the project. Basically anything can be a universal VM if you put enough effort into reimplementing the languages. Much of what sets Parrot apart is its support for frontend tooling.
I certainly think the humor in Parrot/Rakudo (and why they still come up today) is how little of their own self-image the proponents could perceive. The absolute irony of thinking that Perl's strength was due to familiarity with text manipulation rather than to its cultural mass...
I'm excited about this too, but it's a little concerning that there's a brand in the title. There's no shortage of people from ATI, Intel, AMD, Apple, IBM, the game gaggle, etc. to interview. The fact that Nvidia succeeded where others failed is largely an artifact of luck.
I would say nVidia made their own luck. A lot of their success can be attributed to their management never losing sight of the fact that the software is just as important as the hardware. Both drivers and CUDA are key to nVidia's success. ATI and nVidia would trade places on quality of hardware, but there was never a question on the software side.
I'm not sure how Nvidia's driver track record would have helped them; neither drivers nor Linux nor software in any way has ever really been Nvidia's strong suit. And even with its popularity, CUDA cannot explain Nvidia's success alone; you also need the demand from buttcoin, the secondary-but-farcical imitation that is LLMs, and the inexplicable lack of awareness of alternatives, all of which need explaining...
I worked on CUDA and OpenCL in the 2010-2014 timeframe, well before buttcoin and LLMs were profit centers, and Nvidia was already well ahead in the "GPUs as general compute" area. Literally everyone doing highly parallel HPC wanted to use Nvidia, despite AMD having higher throughput for some workloads. It was better, easier-to-use software.
I'll add to that: even though it is true that "neither drivers nor Linux nor software in any way has ever really been Nvidia's strong suit", as GP put it, their software was still miles ahead of its competitors'. In the land of the blind the one-eyed man is king, and all that.
I would agree if I didn't associate Nvidia with incomprehensibly bad driver support. Idk, maybe this is a Linux-only thing, but it's hard to imagine a vendor with a worse reputation for delivering functional software. Only Microsoft itself has perhaps tried to be even more anti-consumer in its approach to support.
CUDA is OK, but it's the sheer mass of people targeting the platform that makes it more interesting than e.g. OpenCL. It hasn't done anything clearly better aside from aggressively courting developer interest. It will pass, and we will have stronger runtimes for it.
What really sets nvidia apart is its ability to market to people. Truly phenomenal performance.
Yea sure I see what you mean. So can you tell me what reputation AMD has for CUDA support on Linux? Or any of the other GPU providers?
They have none because their driver support is nonexistent. That's why everyone under the sun uses Nvidia despite their abysmal software support: it's still better than everyone else.
Because the author is far more interesting than the brand? I'm not sure what would justify branding your personal observations outside of bandwagoning onto hopeful investors.
I disagree with "largely". Luck is always a factor in business success and there are certainly some notable examples where luck was, arguably, a big enough factor that "largely" would apply - like Broadcast.com's sale to Yahoo right at the peak of the .com bubble. However, I'm not aware of evidence luck was any more of a factor in NVidia's success than the ambient environmental constant it always is for every business. Luck is like the wind in competitive sailing - it impacts everyone, sometimes positively, sometimes negatively.
Achieving and then sustaining substantial success over the long run requires making a lot of choices correctly as well as top notch execution. The key is doing all of that so consistently and repeatedly that you survive long enough for the good and bad luck to cancel each other out. NVidia now has over 30 years of history through multiple industry-wide booms, downturns and fundamental technology transitions - a consistent track record of substantial, sustained success so long that good luck can't plausibly be a significant factor.
That said, to me, this article didn't try to explain NVidia's long-term business success. It focused on a few key architectural decisions made early on which were, arguably, quite risky in that they could have wasted a lot of development on capabilities which didn't end up mattering. However, they did end up paying off and, to me, the valuable insight was that key team members came from a different background than their competitors and their experiences with multi-user, multi-tasking, virtualized mini and mainframe architectures caused them to believe desktop architectures would evolve in that direction sooner rather than later. The takeaway being akin to "skate to where the puck is going, not where it is." In rapidly evolving tech environments, making such predictions is greatly improved when the team has both breadth and depth of experience in relevant domains.
Yes, I do. Any way you slice this term, it looks close to what ML models are learning through training.
I'd go as far as saying LLMs are meaning made incarnate - that huge tensor of floats represents a stupidly high-dimensional latent space, which encodes the semantic similarity of every token and combination of tokens (up to a limit). That's as close to reifying the meaning of "meaning" itself as we've ever come.
(It's funny that we got there through brute force instead of developing philosophy, and it's also nice that we get a computational artifact out of it that we can poke and study, instead of incomprehensible and mostly bogus theories.)
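
To make the "encodes semantic similarity" part concrete, here's a toy sketch of the arithmetic such comparisons boil down to. The 4-dimensional "embeddings" and their values are made up purely for illustration; real models learn thousands of dimensions from data rather than hand-written numbers:

    #include <math.h>
    #include <stdio.h>

    /* Cosine similarity: how aligned two embedding vectors are,
       ignoring magnitude. Close to 1.0 means "similar direction". */
    static double cosine_sim(const double *x, const double *y, int n) {
        double dot = 0, nx = 0, ny = 0;
        for (int i = 0; i < n; i++) {
            dot += x[i] * y[i];
            nx  += x[i] * x[i];
            ny  += y[i] * y[i];
        }
        return dot / (sqrt(nx) * sqrt(ny));
    }

    int main(void) {
        /* Toy vectors, hand-written for the example only. */
        double king[]  = {0.8, 0.1, 0.6, 0.2};
        double queen[] = {0.7, 0.2, 0.6, 0.3};
        double toast[] = {0.1, 0.9, 0.0, 0.4};
        printf("king~queen: %.3f\n", cosine_sim(king, queen, 4));
        printf("king~toast: %.3f\n", cosine_sim(king, toast, 4));
        return 0;
    }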
I can't speak for anyone else, but these models only seem about as smart as google search, with enormous variability. I can't say I've ever had an interaction with a chatbot that's anything redolent of interaction with intelligence.
Now would I take AI as a trivia partner? Absolutely. But that's not really the same as what I look for in "smart" humans.
I'm not really sure what to look for, frankly. It makes a rather uninteresting conversation partner, and its observations of the world are bland and mealy-mouthed.
But maybe I'm just not looking for a trivia partner in my software.
The image description capabilities are pretty insane; it's crazy to think it's all happening on my phone. I can only imagine how interesting this is accessibility-wise, e.g. for vision-impaired people.
I believe there are many more possible applications for these on a smartphone than just chatting with them.
>anything redolent of interaction with intelligence
Compared to what you are used to, right?
I know it's elitist, but for most people at <=100 IQ (and no, this is obviously not exact, but we don't have many other things to go by), state-of-the-art LLMs are just... well, better at pretty much everything by comparison, outside of body "things" (for now), of course, as they don't have any. They hallucinate/bluff/lie as much as humans do, and humans might at least know what they don't know, but outside that, the LLMs win at everything. So I guess that, for now, people with 120-160 IQs find LLMs funny but wouldn't call them intelligent, but below that...
The circle of people I talk with during the day has changed since I took on more charity work, which consists of fixing up old laptops and installing Ubuntu on them; I get them for free from everyone and give them to people who can't afford one, including some lessons and remote support (which is easy, as I can just ssh in via Tailscale). Many of them believe in chemtrails, that vaccinations are a government ploy, etc., and several have told me they read that these AI chatbots are Nigerian or Indian (or similar) farms trying to defraud them of "things" (they usually don't have anything to defraud, otherwise I would not be there). This is about half of humanity; Gemma is going to be smarter than all of them, even though I don't register any LLM as intelligent, and with the current models that won't change. Maybe a breakthrough in models will change that, but there's not much chance of it yet.
This is incorrect. IQ tests are scaled such that average intelligence is 100 and scores are approximately normally distributed, so most people fall somewhere between 85 and 115 (roughly 68%, i.e. within one standard deviation of 15).
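
For reference, the one-standard-deviation figure is easy to check; a quick sketch using C's erf, purely to show the arithmetic (IQ mean 100, SD 15, so the 85-115 band is one SD on each side):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Fraction of a normal distribution within k standard deviations
           of the mean is erf(k / sqrt(2)); k = 1 covers 85-115 for IQ. */
        double within_one_sd = erf(1.0 / sqrt(2.0));
        printf("P(85 <= IQ <= 115) ~= %.4f\n", within_one_sd);  /* ~0.6827 */
        return 0;
    }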
IQ is defined such that both the median and the mean equal 100. The combination of sub-100 and exactly-100 is therefore at least as many people as above-100, hence "most people <=100 IQ".
Judging from your comment, it seems that your statistical sample is heavily biased as well, as you are interacting with people that can't afford a laptop. That's not representative of the average person.