Speech recognition, as described above, is an AI too :) These LLMs are huge AIs that I guess could eventually replace all other AIs, but that’s the sort of speculation no one with knowledge of the field would endorse.
Separately, in my role as wizened 16-year veteran of HN: it was jarring to read that. There’s a “rules” section, but don’t be turned off by the name; it’s more like a nice collection of guidelines for interacting in a way that encourages productive, illuminating discussion. One of the key rules is not to take the weakest interpretation of what someone says. Here, someone spelled out exactly how it was done, and we shouldn’t then assume it’s not AI, tie that to a vague, demeaning description of “AI hype,” and then ask an unanswerable question about what the point of “AI hype” is.
To be clear, if you’re nontechnical and new to HN, it would be hard to know how to ask that a different way, I suppose.
I don’t know what either of those means in this context, and I used VB6 for at least a couple of years and have been programming in ObjC and/or Swift since 2006, with some time in Rails over a couple of years.
I’m extremely confused by your comment: it’s apparently near-verboten in polite company, yet it manages to say nothing beyond that, while invoking several things with which I’m quite familiar.
If you are destroyed, I anticipate it will be for a quarter-baked, horrible analogy between ObjC/Swift (or is it Ruby/Swift?) and VB6/VB.NET that somehow has something to do with Ruby.
That’s more down to the likes of Compal than to a shrinking Intel. The myth that you could trust Intel was shattered by their insistence, up until four months before the release date, that Haswell(?) was going to hit its thermal envelope and perf targets. In 2018, iirc, that was the beginning of the end: Apple had to ship a MacBook generation that struggled with thermals for three years and decided never again to be put in that position. Similarly at other important OEMs.
Probably not EU grant-bureaucrat nepotistic corruption: all we have so far is (a) FBI involvement and (b) an el cheapo fake organization, claiming to be French, running a low-rent pressure campaign on behalf of commercial entities.
Smells like freedom fries to me (am American myself)
"Only in the world of AI algorithm training can you claim that you were torrenting 2,400 porn videos for personal use and have that seem like the lesser of two evils."
What happened to Vice?
Is it a private-equity content mill now (a la Newsweek and the ex-Gawker properties)?
(I'm asking because moralizing about torrenting porn is not a very 2010s-Vice thing to do, and I lost track of it.)
Apologies that you're taking it on the chin here. Generally I'll just skip fantastical HN threads with a critical mass of BS like this, with pity, rather than attempt to share (for more on that, cf. https://news.ycombinator.com/item?id=45929335)
I've been on HN 16 years and never seen anything like the pack of people who will come out to tell you it doesn't work, they'll never pay for it, it's wrong 50% of the time, etc.
I was at dinner with an MD a few nights back and we were riffing on this. We came to the conclusion that it was really fun for CS people when the idea was that AI would replace radiologists, but when the first to be mowed down are the keyboard monkeys, well, it's personal, and you get people who are years into a cognitive dissonance thing now.
I want AI to be as strong as possible. I want AGI, I especially want super intelligence. I will figure out a new and better job if you give me super intelligence.
The problem is not cognitive dissonance, the problem is we don't have what we are pretending we have.
We have the dot-com bubble, but with a bunch of Gopher servers and the web browser as a theoretical idea yet to be invented, and that is the bull case. The bear case is that we have the dot-com bubble but still haven't figured out how to build the actual internet: massive investment in rotary-phone capacity because everyone in the future is going to use so much dial-up bandwidth once we finally figure out how to build the internet.
Yeah, it really pulled the veil away, didn't it? So much dismissiveness and so many uninformed takes, from a crowd that had been driving automation forward for years and years; you'd think they'd be more familiar with this new class of tools, warts and all.
Say what, exactly? Driving automation of all kinds with Claude Code-level tools has been incredibly fruitful. And once you've spent sufficient time with them, you know when and where they fall on their faces and when they provide real, tangible, reproducible benefits. I couldn't care less about the AI hype or bubble or whatever; I just use what I see works, as I'm staring these tools down for 10+ hours a day.
The problem is that these conversations are increasingly drifting apart, since everyone has different priors and experiences with this stuff. Some are stuck in 2023; some have such specialized tasks that whipping the agent into line is more work than it saves; others have found a ton of automation cases where this stuff provides clear net benefits.
I don't care for AGI, AI girlfriends, or LLM slop, but strap 'em in a loop and build a cage for them to operate in without lobotomizing themselves, and there's absolutely something to be gained there (for me, at least).
It's not actually subsidized, and the economics of smaller/self-hosted models are a much, much worse nightmare (source: I'm the guy who spent the last two years maintaining llama.cpp plus any provider you can think of). Why is it bad? Same reason 20 cars vs. one bus is bad; same reason it would be bad if you could only use transportation by owning a car.
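The bus analogy has real arithmetic behind it: a provider batching many users' requests onto one GPU amortizes the same fixed hourly cost that a solo self-hoster pays alone. A toy sketch, where every constant is an illustrative assumption rather than a real benchmark:

```python
# Toy model of why shared, batched inference beats solo self-hosting on cost.
# All numbers below are illustrative assumptions, not measurements.
GPU_COST_PER_HOUR = 2.00   # assumed rental cost of one inference GPU, in dollars
SOLO_TOKENS_PER_SEC = 30   # assumed throughput when serving a single user
BATCH_EFFICIENCY = 0.8     # assumed per-stream throughput retained under batching

def cost_per_million_tokens(num_streams: int) -> float:
    """Dollar cost per 1M generated tokens when num_streams users share one GPU."""
    per_stream = SOLO_TOKENS_PER_SEC * (BATCH_EFFICIENCY if num_streams > 1 else 1.0)
    tokens_per_hour = per_stream * num_streams * 3600
    return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

solo = cost_per_million_tokens(1)     # the "everyone drives their own car" case
shared = cost_per_million_tokens(32)  # the "bus" case: one GPU, 32 concurrent users
```

Under these assumptions the shared case comes out roughly 25x cheaper per token; the real constants differ, but that is the shape of the argument.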
Source on it being subsidized? :)
(There isn't one, other than an aggro subset of people lying to each other that somehow literally everyone is losing money, even as the companies post record profit margins.)
(https://en.wikipedia.org/wiki/Hitchens%27s_razor)
Generally, I worry HN is in a dark place with this stuff. Look how this thread goes; e.g., a descendant of yours is at "Why would I ever pay for this when it hallucinates?" I don't understand how you can be a software engineer and afford to have opinions like that. I'm genuinely worried for those who do; I hope the transitions out there are slow enough, due to obstinacy, that they're not cast out suddenly without the skills to get something else.
It's subsidised by VC funding. At some point the gravy train stops and they have to pivot to profit so that the VCs deliver return-on-investment. Look at Facebook shoving in adverts, Uber jacking up the price, etc.
> I don't understand how you can be a software engineer and afford to have opinions like that
I don't know how you can afford not to realise that there's a fixed value prop here for the current behaviour and that it's potentially not as high as it needs to be for OpenAI to turn a profit.
OpenAI's ridiculous ability to raise investment is based on a future potential it probably will never hit. Assuming it doesn't, the whole house of cards falls down real quick.
(You can Ctrl-C/Ctrl-V OpenAI for all the big AI providers)
This is all about OpenAI, not about AI being subsidized... with some sort of directive to copy/paste "OpenAI" for all the big AI providers? (Presumably you meant s/OpenAI/$PROVIDER/?)
If that's what you meant: Google. Boom.
Also, perhaps you're a bit new to the industry, but that's how these things go. They burn a lot of capital building it out b/c they can always fire everyone and just serve at cost. I.e., subsidizing business development is different from subsidizing inference, unless you're just sort of confused and angry at the whole situation and it all collapses into "everyone's losing money and no one will admit it."
You're replying to a story about a hyperscaler worrying investors about how much it's leveraging itself for a small number of companies.
From the article:
> OpenAI faces questions about how it plans to meet its commitments to spend $1.4tn on AI infrastructure over the next eight years.
Someone needs to pay for that $1.4 trillion; annualized, that's roughly two-thirds of what Microsoft makes in a year. If you think they'll make that from revenue, that's fine. I don't. And that's just the infra.
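A quick back-of-envelope check on that scale, using Microsoft's FY2024 revenue of roughly $245B as my assumed reference figure (that number is mine, not from the thread):

```python
# Back-of-envelope check on the scale of the $1.4tn commitment.
total_commitment = 1.4e12                # reported eight-year infrastructure spend
years = 8
annual_spend = total_commitment / years  # $175B per year

msft_revenue_fy2024 = 245e9              # assumption: Microsoft FY2024 revenue, approximate
ratio = annual_spend / msft_revenue_fy2024  # ~0.71, i.e. roughly two-thirds
```

So even spread evenly over eight years, the commitment is on the order of two-thirds of Microsoft's entire annual revenue, every year.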
I'm a big fan and user of AI but I don't see how you can say it's not subsidized. You can't just ignore the costs of training or staff or marketing or non-model software dev. The price charged for inference has to ultimately cover all those things + margin.
Also, the leaked numbers sent to Ed Zitron suggest that even inference is underwater on a cost basis, at least for OpenAI. I know Anthropic claims otherwise for itself.
Sam Altman just needs another trillion dollars, and then we'll finally have AGI, and then the robots will do all the work, and everyone will get a million dollars per year in UBI, and everything will be perfect.
If you were a greybeard, you'd have lived through this with many, many companies, and I extremely doubt you'd be so fixated and angry as to mumble through a parody of what you perceive the counterargument to be.
To be fair, WorldCom ended in an accounting scandal. I had a friend who worked at UUNET and would sneak me into the megahub in Richardson, TX at night to download movies and other large (for the time) files; it was the first time I ever saw an OC148 in real life. UUNET was bought by WorldCom, and after the implosion my friend showed me these gigantic cube farms in the office, completely empty of people. It was very weird.