It's an opinion piece posted in the Technology section. Opinion pieces can be part of sections other than the dedicated op/ed section of a newspaper.
Everything about this piece marks it as opinion (most notably the language used), if that's what you're getting at.
Also, noted below the piece:
> This is an edited version of the Australian Society of Authors 2024 Colin Simpson Memorial Keynote lecture, titled ‘Creative Futures: Imagining a place for creativity in a world of artificial intelligence’
I hear what you're saying, but The Guardian does have an Opinion section this could have been published in; they chose to put it in Tech instead. With the UX in Firefox, I could barely tell it was even in the Tech section, because the section label is small and only bolded, while the rest of the menu options are in regular style. The News heading, however, is quite large and has a strong red underline.
I think I was fair to call this out, and the article has been flagged by others.
I checked. Apparently you can find this in the Opinion section of the Australian edition. Makes sense, given that the author of this piece is an Australian novelist.
I really don't like how this looks like regular news, though. It makes me want to write a plugin, using ChatGPT, that shows a pop-up whenever you're on a news site and the article is classified as opinion (or is obviously an opinion piece) but the webpage doesn't show it as such.
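I'd want the detection to be metadata-first, and only fall back to a model for the "obviously an opinion piece" case. A minimal content-script sketch of the metadata half might look like the following; this is just a sketch, not a finished extension, and the checks (the schema.org OpinionNewsArticle type, the article:section Open Graph tag, and the URL patterns) are simply the heuristics I'd try first:

    // content.ts — minimal sketch of an "opinion labeler" content script.
    // Heuristics only; a fancier version could send the article text to an
    // LLM for classification when metadata is missing.

    function looksLikeOpinion(): boolean {
      // 1. schema.org structured data: some publishers mark opinion pieces
      //    with the OpinionNewsArticle type in JSON-LD.
      for (const script of Array.from(
        document.querySelectorAll<HTMLScriptElement>('script[type="application/ld+json"]')
      )) {
        try {
          const data = JSON.parse(script.textContent ?? "null");
          const types = [data].flat().map((d) => d?.["@type"]).flat();
          if (types.includes("OpinionNewsArticle")) return true;
        } catch {
          // ignore malformed JSON-LD
        }
      }

      // 2. Open Graph article metadata,
      //    e.g. <meta property="article:section" content="Opinion">.
      const section = document
        .querySelector('meta[property="article:section"]')
        ?.getAttribute("content");
      if (section && section.toLowerCase().includes("opinion")) return true;

      // 3. URL heuristics (site-specific and illustrative; e.g. The Guardian
      //    files opinion pieces under /commentisfree/).
      return /\/(opinion|commentisfree)\//i.test(location.pathname);
    }

    // If the page looks like an opinion piece, pin an unmistakable banner on top.
    if (looksLikeOpinion()) {
      const banner = document.createElement("div");
      banner.textContent = "This article appears to be an opinion piece.";
      banner.style.cssText =
        "position:fixed;top:0;left:0;right:0;z-index:99999;" +
        "background:#b00020;color:#fff;padding:8px;text-align:center;font:bold 15px sans-serif";
      document.body.prepend(banner);
    }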
I'm a strong advocate for a united effort to create a training set of the collected works of mankind, free for any AI company to use if, for instance, it uses its profits to fund UBI or some other program to pay us for what it uses.
Anything but AGI profits being used to keep score in the Oligarchy Olympics. I agree that is a bridge too far.
"You can use this, except not commercially unless you do such and such" isn't a free license though. Whether it's free matters more than whether it's nonprofit. GNU showed us how software could be for-profit, yet also free.
My friend uses Oko on iOS to tell her when the walk light is on, and when the numbers are counting down. There is also a navigation aid that she doesn't quite feel comfortable using yet, but I will say the Oko app being able to tell her (quite precisely) what is going on with the stoplight has been a big confidence booster for her getting out into the busy world.
The leader in the field is Be My Eyes, of course. They've been working with Microsoft to integrate GPT-4o vision models into their app, with some great success. What we haven't seen yet is the move to live-video image recognition that could come from something like an OrCam or the Meta glasses (they recently announced a partnership with Meta). I'm guessing there are serious safety issues with the model missing important information and leading someone vulnerable astray.
OrCam has a new product (woe upon those of us who have the paltry OrCam MyEye 2) that the Meta glasses will be competing against; it comes at an eye-watering >$4K price point and seems to do less.
As with the hearing aid industry, which recently went over-the-counter, causing prices to plummet, the vision aid product category is in temporary disarray as inexpensive new technologies make their way into a premium-price market.
>the vision aid product category is in temporary disarray as inexpensive new technologies make their way into a premium-price market.
Thanks for all the info; this is very informative! Rarely do I root for Meta, but they do seem to be in the best position to create affordable tools that are also safe. It really needs to be 100% reliable, as there's no room for hallucinations when you're relying on it to get you across the street safely.
Anyway, this is all very exciting and definitely makes me a little more enthusiastic about the inevitable integration of these models into everyday life.
I’m not sure if this is exactly what you are referring to, but Anthropic has done a lot of interpretability work on Claude, which they’ve published along with the famous "Golden Gate Claude".^1
"We also find more abstract features—responding to things like bugs in computer code, discussions of gender bias in professions, and conversations about keeping secrets."
ACE of 9. Founded two successful alt-weekly newspapers, worked on world-famous collectible card games, lost it all due to illness that nearly killed me, recovering and slowly making my way back.
One thing a high ACE score did for me was make me an irresistible force, even as it seemed to turn the rest of the world into immovable objects.
The US had a president for eight years who was re-elected on his ability to act on his “gut reactions”.
Not saying this is ideal, just that it isn’t the showstopper you present it as. In fact, when people talk about “human values”, it might be worth reflecting on whether this is a thing we’re supposed to be protecting or expunging?
"I'm not a textbook player, I'm a gut player.” —President George W. Bush.
Sometimes it seems as if convenience and price are at the top of the list for consumers in the US market, and that other considerations come in a distant third. We seem to pay little attention to our collective goodwill these days.
I hope we can start putting our money where our mouth is, though.
(Adding on instead of making a new comment)
These are the kind of personal robots I am more interested in. I'm imagining an autonomous, voice-controlled little robot buddy, like a Destiny 2 Ghost (this isn't exactly that, but it's very close). It's productized as something different, but maybe it's a closer form factor, with what seem like useful protective nacelles to keep it from damaging its environment.
It has wifi, and it wouldn’t be a stretch to have a 5G modem in a more expensive model (DJI has a deep bench at this point, all the way up to agriculture and delivery drones that have serious radio communication arrays).
I feel like with all the companies rushing to humanoid robotics, there should be a place for alternate forms like little drone buddies?
It seems like there could be a really sweet price-to-performance ratio that could open up a $1K or $2K personal robot that isn’t about doing tasks, but about remote sensing and processing?
Right. At my school, the accessibility option for tests is to give us more time. A three hour test would be made "accessible" by extending it to six hours or even more.
I was really hoping AI would make our world more accessible, not less.
(eta) Additionally, it would take more instructor or docent time, because no one can be trusted to actually learn the material we're paying tens or even hundreds of thousands of dollars for.
It ain't the future dystopia I'm afraid of, it's the one we're creating this week.
https://soundcloud.com/wort-fm/christine-wenc-on-the-legacy-...
https://soundcloud.com/hachetteaudio/funny-because-its-true-...
https://www.nytimes.com/2025/03/12/books/new-nonfiction-book...