Your points and examples are valid. However, when you say:
> I wouldn’t agree at all that wealthy people are inherently more resilient to stress
I beg to differ.
I think the OP is talking about growing up in poverty. Repeated stress with no relief, which is the condition of poor people in society, has been shown to erode resilience. (Sorry, I don't have references handy, but they should be easy to find.)
And the pendulum swings back toward representation. It is becoming clear that the LLM approach is not adequate to reach what John McCarthy called human-level intelligence:
Between us and human-level intelligence lie many problems. They can be summarized as that of succeeding in the "common-sense informatic situation". [1]
> It is becoming clear that the LLM approach is not adequate to reach what John McCarthy called human-level intelligence
Perhaps paradoxically, if/as this becomes a consensus view, I can be more excited about AI. I am an "AI skeptic" not in principle, but with respect to the current intertwined investment and hype cycles surrounding "AI".
Absent the overblown hype, I can become more interested in the real possibilities (both the immediate ones, using existing ML methods, and the remote, theoretical capabilities that follow from what I think about minds and computers in general) again.
I think when this blows over I can also feel freer to appreciate some of the genuinely cool tricks LLMs can perform.
If you made it your business to publish a newsletter containing copied NYT articles, then wouldn't they have the right to go after you and discover your sent emails?
Exactly, they wouldn't even need all of the emails in gmail for that example, just the ones from a specific account.
The real equivalent here would be if gmail itself was injecting NYT articles into your emails. I'm assuming in that scenario most people would see it as straightforward that gmail was infringing NYT content.
You don't even need insider info - it lines up with external estimates.
We have estimates ranging from 30% to 70% gross margin on API LLM inference prices at major labs, with 50% as a middle-of-the-road figure, and 10% to 80% gross margin on user-facing subscription services, with massively inflated error bars. We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of 10 or more over the lifetime of a model.
The only real source of uncertainty is how much inference the free-tier users consume, which is something the AI companies themselves control: they decide which models are available to free users and what the exact usage caps are.
Adding that up? Frontier models are profitable.
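To make the "adding that up" concrete, here's a back-of-the-envelope version of the arithmetic. Every number is an illustrative assumption (round figures picked to match the ranges above), not any lab's actual cost:

```python
# Back-of-the-envelope model economics; all figures hypothetical.
training_cost = 100e6                # assumed compute cost of one training run ($)
inference_cost = 10 * training_cost  # reports: ~10x training compute over the model's life
gross_margin = 0.50                  # middle of the 30-70% estimated range on API pricing

# At a 50% gross margin, revenue = inference cost / (1 - margin).
inference_revenue = inference_cost / (1 - gross_margin)

total_cost = training_cost + inference_cost
profit = inference_revenue - total_cost

print(f"revenue ${inference_revenue/1e9:.1f}B, "
      f"cost ${total_cost/1e9:.1f}B, profit ${profit/1e9:.1f}B")
# -> revenue $2.0B, cost $1.1B, profit $0.9B
```

Under these assumptions the inference margin comfortably covers the training run; the conclusion is sensitive mainly to the margin figure and to how much free-tier usage eats into it.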
This goes against the popular opinion, which is where the disbelief is coming from.
Note that I'm talking about LLMs rather than things like image or video generation models, which may have vastly different economics.
> We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of 10 or more over the lifetime of a model.
Dario Amodei from Anthropic has made the claim that if you looked at each model as a separate business, it would be profitable [1], i.e. each model brings in more revenue over its lifetime than the total of training + inference costs. It's only because you're simultaneously training the next generation of models, which are larger and more expensive to train, but aren't generating revenue yet, that the company as a whole loses money in a given year.
Now, it's not like he opened up Anthropic's books for an audit, so you don't necessarily have to trust him. But you do need to believe that either (a) what he is saying is roughly true or (b) he is making the sort of fraudulent statements that could get you sent to prison.
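The accounting argument can be sketched with toy numbers (mine, not Anthropic's; the 3x generation-over-generation training cost is an assumption for illustration):

```python
# Toy illustration of "each model is profitable, but the company isn't":
# each generation costs ~3x more to train, and revenue arrives over a model's
# lifetime while the next training run is already being paid for.
models = [
    # all figures hypothetical, units: $B
    {"train": 1.0, "revenue": 4.0, "inference": 2.0},  # gen N, deployed
    {"train": 3.0, "revenue": 0.0, "inference": 0.0},  # gen N+1, still training
]

# Each *finished* model is profitable over its lifetime...
gen_n = models[0]
assert gen_n["revenue"] > gen_n["train"] + gen_n["inference"]

# ...but this year's cash flow is negative, because gen N+1's training
# run is being paid for before that model earns anything.
year_profit = sum(m["revenue"] - m["train"] - m["inference"] for m in models)
print(year_profit)  # -> -2.0
```

The claim is exactly this shape: per-model unit economics positive, company-level annual result negative as long as each successive run is bigger.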
He's speaking in a purely hypothetical sense. The title of the video even makes sure to note "in this example". If it turned out this wasn't true of Anthropic, it certainly wouldn't be fraud.
Smalltalk-80 was also good for graphics programming.
Around 1990, I was a graduate student in Prof. Red Whittaker's field robotics group at Carnegie Mellon. In Porter Hall, I was fortunate to have a Sun 3/60 workstation on my desk. It had Smalltalk-80 on it. I learned to program it using Goldberg & Robson and other books from ParcPlace Systems.
The programming environment was fantastic, better than anything I have seen before or since. You always ran it full screen, and it loaded up the Smalltalk image from disk. As the article says, you were in the actual live image. Editing, running, inspecting the run-time objects, or debugging: all these tasks were done in the exact same environment. When you came into the office in the morning, the entire environment booted up immediately to where you had left it the previous day.
The image had objects representing everything, including your screen, keyboard, and mouse. Your code could respond to inputs and control every pixel on the screen. I did all my Computer Graphics assignments in Smalltalk. And of course, I wrote fast video games.
I used the system to develop programs for my Ph.D thesis, which involved geometric task planning for robots. One of the programs ran and displayed a simulation of a robot moving in a workspace with obstacles and other things. I had to capture many successive screenshots for my papers and my thesis.
Everybody at CMU then wrote their papers and theses in Scribe, the document generation system written by Brian Reid about a decade earlier. Scribe was a program that took your markup in a plain text file (sort of at a LaTeX level: @document, @section, etc.) and generated Postscript for the printer.
I never had enough disk space to store so many full screen-size raster images. So, of course, instead of taking screenshots, I modified my program to emit Postscript code, which I then hacked into Scribe's Postscript output and thus into my thesis. The resulting pictures were vector graphics drawn with Postscript commands, and they looked nice because they were much higher resolution than a screenshot could have been.
My understanding is no, assuming I know what people mean by systolic arrays.
GreenArray processors are complete computers with their own memory and running their own software. The GA144 chip has 144 independently programmable computers with 64 words of memory each. You program each of them, including external I/O and routing between them, and then you run the chip as a cluster of computers.
How do you use qpdf for extraction? Its README states: “qpdf does not render PDFs or perform text extraction, and it does not contain higher-level interfaces for working with page contents.”
Not the person you're replying to, but when they said "extraction" I believe they're talking about extracting pages from a PDF (like "splitting" the PDF apart, page-wise), not text. At least that's a thing I've used qpdf for in the past.
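For example, pulling a page range out of a PDF with qpdf looks like this (file names are placeholders; `--pages` syntax per recent qpdf versions):

```shell
# Extract pages 1-3 of input.pdf into excerpt.pdf.
# "--pages input.pdf 1-3 --" selects the range; the trailing -- ends the page spec.
qpdf input.pdf --pages input.pdf 1-3 -- excerpt.pdf
```

You can list multiple input files and ranges after `--pages` to merge pages from several PDFs into one output.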
Which is also what the "extract" button does in Adobe Acrobat Pro DC for Professional Enterprise Customers or whatever they're calling it now, so it's arguably a term of art for PDFs.
(2) is just a typo, but as for (1), “metal–organic” correctly uses an en dash, which is quite nice to see. They consistently use the en dash even in their tweets etc., which is lovely.
Probably written by Swedes. We also use -s suffixes in many places, but basically never with apostrophes, so using them correctly when writing English can be hard (and vice versa: going back to Swedish, it's easy to add them in the wrong places).
1. Very few people these days understand the difference between hyphens, en dashes, and em dashes. Converting fonts and character sets on the internet then adds another layer of errors. We could settle on a single '-' for hyphen and en dash, and ' -- ' for em dashes in fonts without a ligature, but for some reason that convention hasn't carried over from the typewriter days. Microsoft Word is probably a big part of why.
Thanks for posting. Long video, but at about the 12-minute mark he says something about warranty work, relevant to TFA. He says that the hourly rate for any work done under warranty is set artificially low by the manufacturer.
Say a dealership would normally charge a customer $200 an hour for a job, of which the technician will make about $50 an hour. But if the same work is done under warranty, i.e., the manufacturer is paying, the dealership will get only $100 an hour. The dealership, in turn, will reduce the technician's pay to less than $50. Thus, the manufacturer is squeezing the dealership, which is squeezing the tech.
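With the numbers from that example, the squeeze is easy to put in a few lines. All figures are the illustrative rates above, and the assumption that the tech's cut stays proportional is mine:

```python
# Hourly rates from the example; warranty work pays the dealership half.
retail_rate = 200     # what a customer pays the dealership per hour ($)
tech_share = 0.25     # the tech normally keeps ~$50 of that $200
warranty_rate = 100   # what the manufacturer pays for the same hour

tech_retail_pay = retail_rate * tech_share      # $50/hr on customer-paid work
# If the dealership keeps the tech's share proportional, warranty work pays:
tech_warranty_pay = warranty_rate * tech_share  # $25/hr

print(tech_retail_pay, tech_warranty_pay)  # -> 50.0 25.0
```

So a tech doing mostly warranty jobs can see their effective hourly rate cut roughly in half, even though the labor is identical.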