Came here to say pretty much this. Hardware seems more valuable than a model.
I think AI could be commoditized. Look at DeepSeek stealing OpenAI's model. Look at the competitive performance between Claude, ChatGPT, Grok, and Gemini. Look at open weight models, like Llama.
Commoditized AI still needs to be used via a device. The post argues that other devices, like watches or smart glasses, could be better positioned to use AI. But... your point stands. Given Apple's success with hardware, I wouldn't bet against them making competitive wearables.
Hardware is hard. It's expensive to get wrong. It seems like a hardware company would be better positioned to build hardware than an AI company. Especially when you can steal the AI company's model.
Supply chains, battery optimization, etc. are all hard-won battles. But AI companies have had their models stolen in months.
If OpenAI really believed models would remain differentiated, then why venture into hardware at all?
Moreover, Apple owning access to the device hardware AND to the data those models would need to create value for an Apple user makes the company even more robust.
They could weather years of AI missteps while cultivating their AI "marketplace", which lets the user select a rev-shared third-party AI if (and only if) Apple cannot serve the request.
It would keep them afloat in the AI-space no matter how far they are behind, as long as the iPhone remains the dominant consumer mobile device.
The only risks are a paradigm shift in mobile devices, and the EU, which has clearly noticed that Apple operates multiple uneven digital markets within its ecosystem...
"The only risks" lol. Tim Cook thought those were the only risks to the App Store too, now look where he is. You lack imagination.
What if [Japan|EU|US DOJ|South Korea] passes a law preventing OEMs from claiming user data as their property? If Apple really tries to go down the road of squeezing pre-juiced lemons like this, I think they're going to be called out for stifling competition and real innovation.
Go further - what if those entities pass laws preventing Meta, or Google from claiming user data as their property? Or even the AI companies that are siphoning content from the web.
In practice, it's not that restrictive. It's pretty rare to find something that actually requires a more recent version. (And there's slow progress on updating it.)
Randy Eckenrode is doing work pushing this forward almost every day, and constantly posting little updates on Matrix in the Nix on macOS channel. If anyone reading this is invested in that work, you can track it just by idling in the room. I'm sure he'd appreciate help or even just sincere expressions of encouragement or gratitude.
Granted, it's not restrictive if you only want to use Nix for general utilities and Unix libraries. But it's extremely restrictive if you want to use Nix to manage macOS apps. And I love Nix, so of course I want to do that :)
This post contains a number of statements that mislead the reader into believing that we are overreacting to memory safety bugs. For example:
> An oft-quoted number is that “70%” of programming language-caused CVEs (reported security vulnerabilities) in C and C++ code are due to language safety problems... That 70% is of the subset of security CVEs that can be addressed by programming language safety
I can't figure out what the author means by "programming language-caused CVE". No existing analysis defines such a category; the analyses just look at CVEs and their associated CWEs. The author invented this term but did not define it.
I can’t figure out what the author means by “of the subset of security CVEs that can be addressed by programming language safety”. First, aren’t all CVEs security CVEs? Why qualify the statement? Second, the very post the author cites ([1]) states:
> If you have a very large (millions of lines of code) codebase, written in a memory-unsafe programming language (such as C or C++), you can expect at least 65% of your security vulnerabilities to be caused by memory unsafety.
The figure is unqualified. But the author adds multiple qualifications. Why?
Proofs are good evidence that pure functions are easier to reason about. Many proof assistants (Coq, Lean) use the Calculus of Inductive Constructions, a language that has only pure, total functions, as their theoretical foundation. The fact that state-of-the-art tools for reasoning about programs are built on pure functions is a pretty good hint that pure functions are a good tool for reasoning about behavior. At least, they're the best way we have so far.
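To make that concrete, here's a minimal Lean 4 sketch (`double` and the theorems are made-up illustrations, not taken from any particular development): because the function is pure and total, "what does it do?" is answered by unfolding its definition, and the proofs are literally `rfl`.

```lean
-- Illustrative only: a pure, total function and two facts about it.
def double (n : Nat) : Nat := n + n

-- Unfolding the definition is all it takes; the kernel checks both by reduction.
theorem double_three : double 3 = 6 := rfl
theorem double_unfold (n : Nat) : double n = n + n := rfl
```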
This is because of referential transparency. If I see `f n` in a language with pure functions, I can simply look up the definition of `f` and copy/paste it into the call site, with every occurrence of `f`'s parameter replaced with `n`. I can then simplify the expression as far as possible. Not so in an imperative language. There could be global variables whose state matters. There could be aliasing that changes the behavior of `f`. To actually understand what the imperative version of `f` does, I have to trace the execution of `f`. In the worst case, __every time__ I use `f` I must repeat this work.
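A tiny Haskell sketch of that substitution (`f`, `g`, and `g'` are made up for illustration, not from the thread):

```haskell
-- Because `f` is pure, `f 3` can be replaced by its body with n := 3
-- anywhere it appears, without changing the program's meaning.
f :: Int -> Int
f n = n * n + 1

g, g' :: Int
g  = f 3 + f 3                  -- original call sites
g' = (3 * 3 + 1) + (3 * 3 + 1)  -- after copy/paste substitution

main :: IO ()
main = print (g == g')  -- True
```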
And if I go to a flat earth conference, I will find that they produce lots of “proof” for flat earth.
I don't really accept "this group of people whose heads are super far up the 'pure functions' ass choose purity for their solutions" as "evidence" that purity is better.
I’m not saying that purity is bad by any stretch. I just consider it a tool that is occasionally useful. For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.
>For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.
Modeling the method that modifies internal state as a function from old state to new state is the simplest way to accomplish this goal. I.e., preconditions and postconditions.
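As a rough sketch of what that looks like (Haskell; the `Counter` type and `increment` are my own made-up example): the "method" becomes a pure function from the old state to a result paired with the new state, and pre/postconditions become ordinary statements about its inputs and outputs.

```haskell
newtype Counter = Counter Int deriving (Show, Eq)

-- increment :: old state -> (result, new state)
-- Postcondition (informally): if increment (Counter n) = (r, Counter n'),
-- then r = n and n' = n + 1.
increment :: Counter -> (Int, Counter)
increment (Counter n) = (n, Counter (n + 1))

main :: IO ()
main = do
  let s0 = Counter 0
      (r1, s1) = increment s0
      (r2, s2) = increment s1
  print (r1, r2, s2)  -- (0,1,Counter 2)
```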
It doesn’t matter if students know what to expect.
An oral exam isn’t the same as reading a written exam out loud. There are a set of learning outcomes defined in the syllabus. The examiner uses the learning outcomes to ask probing questions that start a conversation - emphasis on conversation. A conversation can’t be faked. A simple follow up question can reveal a student only has a shallow understanding of material. Or, the student could remember everything they’ve been told, but fail to make connections between learning outcomes. You can’t cram for an oral exam. You have to digest the course material and think about what things mean in context.
After all, students know what to expect on standardized tests. Some still do better than others :-).
It can. The examiner's questions can be parsed via covert voice recognition, processed by AI, and the replies played back through, e.g., a small earpiece (a magnet inside the ear canal, driven by an induction loop wire worn around the neck; it works fine, google "микронаушник магнит" ("micro-earpiece magnet") in Russian, it's popular here). It's not as hard as you may think.
Some people already "adjust" their names for a Western audience, and others don't. So, given the <name 1> <name 2> of a Japanese person, I don't know whether <name 1> is the family name or the given name.
I don't care either way -- I'd just prefer things be consistent so that when I recognize a Japanese name I can call them by the proper name faster :).
Modern Chinese family names are often one syllable, but China and Japan do not share a language or much ancestral connection, and Japanese family names are usually not one syllable.
You could look up the n most common Japanese family names (myoji, 名字). You would probably get a list like Yamada, Suzuki, Watanabe, Tanaka, etc. Of course, this wouldn't cover every case, but it might help. 頑張ってください (good luck)!
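A rough sketch of that lookup heuristic (Haskell; the name list is a tiny hand-picked sample, not a real frequency dataset):

```haskell
import qualified Data.Set as Set

-- A tiny illustrative sample of common family names; a real list would be much longer.
commonFamilyNames :: Set.Set String
commonFamilyNames =
  Set.fromList ["Sato", "Suzuki", "Takahashi", "Tanaka", "Watanabe", "Yamada"]

-- Given "<name 1> <name 2>", guess which token is the family name.
guessFamilyName :: String -> String -> Maybe String
guessFamilyName n1 n2
  | n1 `Set.member` commonFamilyNames = Just n1
  | n2 `Set.member` commonFamilyNames = Just n2
  | otherwise                         = Nothing

main :: IO ()
main = print (guessFamilyName "Tanaka" "Hiroshi")  -- Just "Tanaka"
```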
Even dynamically typed languages like CL have type systems. However, CL implementations are under no obligation to statically check that a program is type-safe. Those checks are deferred until run time — hence the author’s complaints.