> … but it seems like the judge simply doesn't get the objections. And the reasoning is really strange
The full order is linked in the article: https://cdn.arstechnica.net/wp-content/uploads/2025/06/NYT-v.... If you read it, the reasoning becomes clearer: the person who complained filed a specific "motion to intervene", which has a strict set of requirements, and those requirements were not met. IANAL, but nothing here seems too strange to me.
> Also, rejecting something out of hand simply because a lawyer didn't draft it seems really antithetical to what a judge should be doing. There is no requirement for a lawyer to be utilized.
This is also addressed in the order: an individual has the right to represent themselves, but a corporation does not, and this was initially filed by a corporation. The judge did exactly what a judge is supposed to do: interpret the law as written.
In most situations, panicking and dereferencing a null pointer lead to the same outcome: the binary crashes. You can unwind and catch panics in Rust, but I’m not sure that would have helped in this scenario, as it might have immediately run into the faulting code again.
However, I would assume that the presence of an «unwrap» would have been caught in code review, whereas it’s much harder to be aware of which pointers can be null in Java/C++.
> In most situations panicking and dereferencing a null pointer leads to the exact same scenario: The binary crashes.
This is a dangerous misconception. There's no guarantee that's the case, especially in C, where pointers are often subscripted.
A trap is the common behavior, but nothing dictates it: dereferencing a null pointer is undefined behavior, and a subscripted null pointer with a large enough offset can land in mapped memory and silently corrupt data instead of crashing.
> In this article, we study the convergence of datalog when it is interpreted over an arbitrary semiring. We consider an ordered semiring, define the semantics of a datalog program as a least fixpoint in this semiring, and study the number of steps required to reach that fixpoint, if ever. We identify algebraic properties of the semiring that correspond to certain convergence properties of datalog programs. Finally, we describe a class of ordered semirings on which one can use the semi-naïve evaluation algorithm on any datalog program.
It’s quite neat, since this allows them to represent linear regression, gradient descent, and shortest paths (APSP) within a framework very similar to regular Datalog.
They have a whole section on the necessary conditions for convergence (i.e., termination).
Boomph is a Rust re-implementation of BBHash, which is included in the survey (and dominated by three other implementations). AFAIK there’s no reason to think it would perform any better than BBHash.
Addendum: BBHash, in turn, is a re-implementation of FiPHa (perfect hashing through fingerprinting). There are quite a few re-implementations of FiPHa: BBHash, Boomph, FiPS, FMPH, etc. As shown in the survey, BBHash is by far the slowest; even though it implements exactly the same algorithm, FMPH is much faster. Its paper [1] also compares to Boomph. The beauty of the fingerprinting technique is that it is super simple, which is probably why there are so many implementations of it.
We have a habit of taking our eye off old problems while juggling several new ones. By the time someone notices we have a problem, the dogleg in the graphs, where the n² solution stopped fitting into the CPU cache, has been obvious for months, but nobody was looking. And we dance around the fact that we would have had time to fix the problem properly if we had noticed it when it first became measurable, which adds anxiety to the cleanup work.
And then someone learns from this experience, gets the bright idea to set up an alert for such things, but the alert doesn’t factor in things like customer base growth or feature creep slowly pushing up the expected runtime. Eventually organic load gets close to the alarm and then the fucking thing goes off on a three day weekend (why is it always a long weekend or just before one?) and then we wage war on alarm overreach and the whole cycle repeats itself.
We like to think of ourselves as blazing trails in the wilderness but most of the time we are doing laps around the parking lot.
13% mentioned error handling as the biggest challenge of using Go. This was not a multiple-selection question: you had to pick exactly one answer. So we don't know how many people consider it challenging at all. (This is typically why you use a 1-10 scale per option.)
> It marks a piece of code to be treated as data (to be sent to the client).
> This means that whoever imports onClick from the backend code won’t get an actual onClick function—instead, they’ll get '/js/chunk123.js#onClick' or something like that identifying how to load this module. It gives you code-as-data. Eventually this code will make it to the client (as a <script>) and be evaluated there.
The point of quoting in Lisp is that you get the code actually as data: You can introspect it ("how many string literals are there in here"), rewrite it ("unroll every loop once"), serialize it, store it in a database. And more importantly: The code is structured in the same way as any data in a regular program (lists). It's not hard for the developer to do any of these things.
If I get back '/js/chunk123.js#onClick', I simply have a reference, which I can use to invoke it remotely. The code appears to still be sent as bundled JavaScript, evaluated as usual, and then linked to the reference. There's a small connection to code-as-data in the sense that you need to be able to serialize code in order to share it between a server and a client, but other than that I don't really see much of a connection.
Sure, and the article does acknowledge that directly:
>Of course, this is a lot less powerful than quoting because the evaluation strategies are being prescribed by React, and there’s no kind of metaprogramming like transforming the code itself. So maybe it’s still a stretch.
To me, the connection is that I'm able to treat a reference to serialized code as a first-class primitive that can be passed around the server code (and composed via component tag composition), but indeed it's not full-on metaprogramming.
> People who "figured out" Zig tend to be fiercely loyal to the language in a similar way as Rust evangelists to Rust.
This is very much not productive, and you’re now part of spreading this narrative. There are plenty of people out there who have «figured out» and appreciate both Zig and Rust without becoming attached to either.
I’m interested in communities which look to other languages for inspiration and admiration, not judgement and alienation.
For what it's worth, I found the Zig community on the biggest Zig discord very nice and welcoming. But that said, there is a lot of "you have to understand Zig" sentiment. Also, there is a lot of "I discovered Zig and it's finally showing me how to program" echoed as well.
I don't find this an unfair judgment but rather an observation.
If you feel that this is an optimal programming language that gives more robustness and clarity than other languages, then it's natural to be preachy about it.
This is similar to Rust being sold as safe language, where similarly the proponents of Rust feel that the advantages of Rust need to be spread.
As a contrast, Odin focuses on "joy of programming" as its main goal, and the author does not make any claims of the language having killer features to choose it over something else.
However, it seems to be successful in that new users tend to remark how pleasant and fun it is to program in the language.
You're kinda proving my point here by using such loaded terms. You've chosen the term "preachy" (a negative word) to describe people who are excited about advancements in programming languages (e.g. the borrow checker, powerful type systems, comptime, alignment as part of the type system). You've chosen not to mention that Rust keeps being the "most loved programming language" (according to Stack Overflow); isn't this a sign that people find it joyful?
> Also, there is a lot of "I discovered Zig and it's finally showing me how to program" echoed as well.
So, did you try Zig? How did you find it? Did it show you a new way to program? Or were you already aware of this way? Or do you think it's not a good way? What did you find interesting? What features did you steal because they were good? What do you think is overrated? These are the questions I'm interested in from other programming language designers!
> As a contrast, Odin focuses on "joy of programming" as its main goal, and the author does not make any claims of the language having killer features to choose it over something else.
And that's a fair thing to say! You can say that C3 is just a slightly better C and doesn't have any other killer feature. I'm just not sure why you need to talk negatively about other languages.
I tried Zig in the 2017-2018 span (and as part of research I've read quite a bit of Zig over the years). To me, the language had some details not previously tried out: special operators for wrapping ops, error-value-based error returns, and pervasive NPOT types. But overall it felt unnecessarily verbose, with what I felt were unnecessary changes to established syntax in standard constructs such as "for" and "while". For this reason I started to contribute to C2 instead.
However, my impression was obviously coloured by being around 45 at the time and used to programming in many different languages. Plus I grew up with BASIC, Pascal and C.
There's going to be quite a different experience for someone coming from Go/JS/Java and venturing into low level programming for the first time!
That is not to say that all of the people enthusiastic about Zig come from those particular languages, but I think C is considered a scary language by many people, so C alternatives tend to attract people from higher-level languages to a greater degree than C itself.
When I eventually started on C3, I incorporated some features from Zig. I ended up removing all of them as I found them to be problematic: untyped literals combined with implicit widening, unsigned error on overflow, special arithmetic operators for wrap, saturation.
However, I am very grateful that Zig explored these designs.
From Odin I ended up including its array programming, but for vectors only. I also adopted Odin's use of distinct types.
But most of the C3 features are GCC C extensions plus ASTEC inspired macros.
A lot of the "showing me how to program" sentiment is common among people who learn Lisp/Clojure, Haskell, Erlang/Elixir, APL (oh, I mean, Numpy and Spark), or any other language that significantly differs from what they're used to. In the same vein, C is often a revelation for those who cut their teeth on JS and Python.
Indeed, Zig has interesting features that make you think in ways you wouldn't when using C, like the ability to offload large amounts of computation to comptime code, or using different allocators at different times (a super simple arena allocation per game frame, for instance).
"A language that's not changing the way you think about programming is not worth knowing."
This is a good point about narrative spreading, in addition to marketing. People can become evangelized by their use of certain languages or by comments from certain language creators, then go on to attack others for using or even just wanting to try other languages. This shouldn't be what HN is about. It makes it look like HN has a language approval list.
As for both C3 and Odin, they've been around for many years, yet don't even have a Wikipedia page and have relatively low numbers on GitHub. That comes across as more time spent pushing or hyping on HN, than those languages being considered a truly viable alternative by the general public. Just weird, because you would think it should be the other way around.
Did you know that Wikipedia editors will aggressively remove entries about lesser-known languages? Several Wikipedia articles on Odin, by various authors, have been removed over the years.
Talking about GitHub numbers, we can look at Vlang, which initially had an astronomical trajectory due to overpromising: selling a language that would solve long-standing issues, such as needing neither manual memory management nor a GC, etc.
Such viral popularity creates a different trajectory from organically growing word of mouth such as in the Odin case.
Vlang also has a Wikipedia page.
Is this then proof that it is a viable alternative to the general public? This is what you argue.
Wikipedia and its processes are independent of any language. If Odin or other languages were removed, they were likely judged as not meeting the standard, or not popular enough. That the Odin language is so old (around 9 years) and still not on it is indicative of it not being as popular as various people are hoping.
The use of negative catch phrases and envious put downs by competitors of Vlang has no bearing on the Wikipedia process. They will not care about any competition or politics among programming languages. The language either meets their standard and proves its case, that it should have a page, or not. Just like Zig, Nim, Rust, etc... have done.
Odin isn’t as well known as Zig, true. But what you seemed to argue was that this was a deliberate choice by the Odin community: to look for hype on Hacker News rather than doing the leg work of getting a Wikipedia article about the language.
This idea is what I criticize.
Not to mention that Wikipedia’s notability criteria are increasingly hard to live up to as tech news gets more and more decentralized.
It is not enough for notability that the Odin author is interviewed in various podcasts. It’s not enough for the language to be used in a leading visual effects tool and so on. These are not valid references for Wikipedia.
So how did Vlang achieve it? By commissioning books on the language(!). Once there was a book on V (nevermind no one bought it) it fulfilled the Wikipedia criteria. There are discussions about this on various V forums.
So let go of the idea that Wikipedia is proving anything.
First, for clarification, V developers cannot control who writes books about the language. To say otherwise looks like projection of what competitors have done or plan to do.
The first book written about V (5 years ago) looks to have been a total surprise to the community and its creator, because it was written in Japanese[1]. The 2nd book written about V has clearly sold well and received very good reviews[2]. Its author had no connection with V development, and there are interviews with him. A 3rd book[3], which was for academic circles, is not primarily about V; it does, however, use the language for random number generation and explanations on the subject.
The point I'm getting at is the difference between marketing oneself on HN or podcasts as a viable alternative versus being an actual viable alternative used by the general public. It's one thing to act like or say you're popular; it's another thing to be popular. There is a qualitative difference, which is even picked up on by Wikipedia, and which is arguably why Odin and C3 don't have a page or the numbers on GitHub.
Despite any misdirected anger or envy, that problem has nothing to do with other languages like V or even Zig. It's up to fans and interested third parties of that language to show and report widespread usage.
Have to disagree. Some type of solid metric has to be used, beyond claims by fans or language creators bombarding multiple social media sites.
First, "future prospect", is a claim almost any language can try to make. Unless it is a language created by a well known corporation (Carbon for example) or famous programmer (Jai or Mojo), such claims lack a foundation. A new language can really only make the argument of truly being a future prospect, if it comes from something already successful or famous.
Thus, for most newer languages, GitHub is a valid metric. Not just stars, but the number of contributors and activity associated with the repo. Other things like books on Amazon by third parties or articles about the language in well known magazines, would clearly count too. These things are measurables, beyond just hype.
Unfortunately those things often come down to a chicken-and-egg scenario. Popular things get more popular: because there is demand, people write articles, then people visit the repo, write books, etc. These things are strongly linked.
So which language do you use then? I've never seen a language that doesn't have bad things to say about other languages. Zig bdfl himself accused vlang of committing fraud a while back.
Every language designer takes things they like about some languages and leaves things they don't like.
> I've never seen a language that doesn't have bad things to say about other languages.
That's why I said "communities" and not "languages". Every programming language has a wide set of people who use it. You can always find some people who constantly say bad things about other languages. You can also find people who are interested in the different trade offs of the language. I use languages which are technically interesting, and then I engage with the parts of the community which are interested in finding the best solutions to actual problems.
And guess what? Most of the Zig and Rust community are, in my experience, way more focused on solving real problems than to push their language at all cost. Both /r/rust and /r/zig will often recommend different languages. I mean, this was the most upvoted comment around how to convince someone's boss to use Rust over Python: https://old.reddit.com/r/rust/comments/14a7vgo/how_to_convin....
> Zig bdfl himself accused vlang of committing fraud a while back.
I think there's a difference between a critical generalization of a community and the mindset behind it and how that relates to the language (without weighing in on how legitimate that criticism is), and a direct accusation that one individual did a specific bad thing.
> Zig bdfl himself accused vlang of committing fraud a while back.
That was truly foul. On top of that, he begged readers to give their money to Zig. Clearly some have no limits on what to say and do against other languages, or to sell their own.
That's why whatever bad things a creator or evangelist says about another language, people shouldn't just swallow, and instead take with a grain of salt and some skepticism.
Is it because, as the leader of a language, he shouldn't be making "attacks" against other languages? Because, as far as V being a fraud, he was 100% correct.
> Is it because, as the leader of a language, he shouldn't be making "attacks" against other languages?
Actually, yes. Not only from the angle of common decency or adhering to a code of conduct, but as a matter of professionalism and setting the example for followers.
> as far as V being a fraud...
That is a provably false claim from competitors, who should not be engaging in such activity.
Paying supporters[1][2][3] (ylluminate, gompertz, etc...) of the V language have even gone on record at HN, to clearly state such competitor or evangelist claims are false, and that they are happy with the language.
Not only can such competitor generated claims be seen as false, through direct V supporter refutation, but by the visible progress of the project as a whole. Over the years, the V language repo continually amasses thousands of stars and hundreds of contributors, that can be plainly seen on GitHub. It is a significantly large and successful project. To pretend or argue otherwise, is very disingenuous. People are there, because they like using Vlang[4].
Yeah that's a half-assed apology. Yesterday's post might have unintentionally sent a lot of heads spinning, "this is not the way to do this" is not an apology, it's a double-down.
I understand there is a potential heap overflow in atop, thanks for letting everyone know; but you're also letting the people capable of exploiting it know that the possibility exists in the wild. Due process is to let the developers fix it and then tell everyone to upgrade.
Anyway a C process that runs as root for lifestyle purposes (e.g. not a critical service) is a big no-no. And I say this as I like to write C, I love C. But I wouldn't push my C code on anyone else's computer, especially requesting root access. I'm not that good.
(You are writing here under the name "Niten" which I am going to guess is not your full name. I am writing under the name "gjm11" which is also not my full name, though as it happens my full name is readily discoverable via my HN profile while yours is not. Obviously neither of us actually believes that there is something wrong with not stating your full name explicitly every time you write something.)
> Obviously neither of us actually believes that there is something wrong with not stating your full name explicitly every time you write something.
Who knows? Maybe "Niten" does believe that and has a massive shame and public-embarrassment kink. There's nothing wrong with that; that sort of thing is totally harmless.
I'd argue that it'd be a beneficial life lesson for the people who are freaking out over a "Hey, maybe stop using 'atop'." comment to learn how to enhance their calm.
Blowing one's stack over every little thing shortens your lifespan! It's best to learn how to take friendly warnings about bad things in stride.
Yes? Imagine a bug where iMessages are leaked over Bluetooth when a user has installed an application that integrates with some watch brand. Bring this to an airport and you can steal hundreds or thousands of messages from a wide range of people. That’s a wildly different attack vector than targeting macOS.
That said, I don’t see why Apple can’t provide a toolkit/certification that would make it safe to communicate over Bluetooth. They already have it in place for the Apple Watch.
Imagine a bug where the Apple Passwords app leaks over HTTP. Bring this to an airport and you can steal hundreds/thousands of Passwords from a wide range of people.
>The lack of encryption meant an attacker on the same Wi-Fi network as you, like at an airport or coffee shop, could redirect your browser to a look-a-like phishing site to steal your login credentials.
Should be, but BT stacks are super crap and it's hard to truly guarantee that. Pretty sure they do not currently require the highest (actually proper) security level from everyone.
Well they could require a security level for starters and require only secure pairing (the fact that we even have something besides secure pairing should make a few bells ring), but that still leaves a bunch of avenues for an external vendor to fuck up their side of the implementation.
It's a whole other system outside of Apple's control, and some mutually agreed-upon Bluetooth LE elliptic key does nothing to protect it in its entirety. It still leaves cryptographic mistakes, side channels, and all other vulnerabilities.
Like, what does https:// or transport encryption in general really say about the website's security to you? Not much besides transport, does it?
Now we want to expose more than notification contents over Bluetooth (LE)? Are we sure? It has to be carefully designed.
You have to trust 3rd parties at some point. Apple can make it reasonably secure and let the user decide if 3rd party accessories are worth the potential risk but that option is never exposed.
Really, Apple allows HTTPS connections, and the same implementation concerns apply there. The web server could publish its private and session keys to a "status" page and leak enough to make decryption trivial.
I think it'd be more honest if they say "we don't want to give users options" (for better or worse) instead of claiming it's security
This whole thread is chockful of thought-terminating cliches, and I say that as someone who grew from a waiter to a developer thanks to Apple and made a lot of these arguments.
I also worked on Android Wear's iOS app for working with iPhones.
The major problem I see now with these excuses, that I'd like to claim wasn't an issue when I was making them circa 2015-2017, is they're cargo cult (a la Apple likes making things that just work) or boogeymen (if they did anything different, a bluetooth connection would be used, unencrypted, sending all your data into the ether).
The watch has been out for 10 years.
Software is software. Where there's a will, there's a way.
It's very, very, very, hard to believe there's 0 way for Apple to ensure an encrypted connection.
Put another way, avoiding the global observations: If it's impossible, why allow watches to be paired at all?
extreme handwaving hand-me-down 6 year old iBook(?) circa 2005 => wow software can be beautiful => hacking on AppleScript => hacking on iPhone OS 1.1.4 decompiled SDK => iPhone 2 with the App Store(tm) => shit, I can make money off this? => dropout => startup => sold it => saw what an acquisition looks like => by the grace of god herself, somehow made it through Google interviews.
(happy to detail more, like everyone, I love talking about myself :P but figured I'd start with the TL;DR, i.e. the App Store + subsequent boom happened at such a time that made it seems reasonable, years later, to dropout, and having 0 responsibility outside restaurant shifts gave me a fulcrum)
> that I'd like to claim wasn't an issue when I was making them circa 2015-2017,
Well, I wouldn't say that the standards for (software) security were anywhere near as high as they are now. It makes sense that our requirements for things change.
> It's very, very, very, hard to believe there's 0 way for Apple to ensure an encrypted connection.
Sure there are ways, but without regulation I struggle to see why should/would Apple ever bother. Nor do I think that a forced way would be held to the same standards as the rest.
> Put another way, avoiding the global observations: If it's impossible, why allow watches to be paired at all?
Yes, but they can actually know it fulfils some security criteria of theirs. Doesn't have fundamentally broken cryptography hidden somewhere, doesn't leak its keys, all that bare minimum is really difficult to guarantee with external unknown implementations.
Might be, but I meant the wearables' stacks. Fundamentally Apple can't ensure much more than a vaguely transport encrypted connection to such a peripheral.
Apple can't (trivially) detect if there's a fatal flaw in the way the other side derives their secrets for example. They can't know if the device doesn't have a backdoor characteristic/API that gives access to the key material. They can't know if that proprietary stack can't be exploited in n+1 ways because it has been written by an underpaid intern.
But if Apple gave access to everything over BLE they would be expected to. At least by most Apple users. Be it a good or a bad thing. It's a rather enormous access vector, if they'd provide feature parity(-ish) with Watch.
Much more sensible would be to make such features available to apps (and by proxy, wearables) with entitlements. But even then it can be just as insecure, just by proxy.
No, the vendor's BT stack would be responsible for broadcasting any responses back to the device -- like, in the article, "send text messages, or perform actions on notifications (like dismissing, muting, replying)"
Do you actually have anything conducive to say, anything specific you'd like to argue against?
Encryption is optional, there are four security levels for BLE, multiple pairing methods, privacy extensions, there are so many ways to mess things up.