
As a dyed-in-the-wool print debugging advocate, and a Gleam-curious Erlang/BEAM enthusiast, this is very interesting for me.

Thanks for all your work, great to see how well the language and tooling are maturing.


print debugging is the best debugging <3 Thank you for the kind comment!!


Nah it's just the easiest and most reliable way. Usually anyway; sometimes you have extreme timing or space constraints and can't even use that. On microcontrollers I sometimes have to resort to GPIO debug outputs and I've worked on USB audio drivers where printf isn't an option.


hello fellow print debugging enjoyer, rejoice!


Indeed, for me, the IDE debuggers became useless when we started writing multi threaded programs.

Printf is the only way to reliably debug multi threaded programs.
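
Even so, a tiny bit of structure helps. A minimal sketch (in Python here, purely for illustration): tag each print with the thread name and a monotonic timestamp so the actual interleaving is visible without a debugger pausing anything.

    import threading, time

    def log(msg):
        # thread name + monotonic timestamp makes the interleaving visible
        print(f"[{time.monotonic():10.6f}] [{threading.current_thread().name}] {msg}", flush=True)

    def worker(n):
        log(f"start job {n}")
        time.sleep(0.01 * n)  # stand-in for real work
        log(f"finish job {n}")

    threads = [threading.Thread(target=worker, args=(i,), name=f"worker-{i}") for i in range(3)]
    for t in threads: t.start()
    for t in threads: t.join()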


> Those who care very deeply about very tight privacy

> that has enough privacy to be sustainable

These are the key phrases. Mozilla has hitched its wagon to advertising. Behind all the bluster over last week, the underlying direction is clear. They bought Anonym [0] and Ajit Varma, the new VP of Product for Firefox and source of the updates, is ex-Meta. It's reasonable to assume that he's there, in part, because of advertising expertise.

Some will see Anonym's "privacy-powered advertising" as "enough privacy" and the only viable way to sustain Firefox without Google's annual cash injection.

Others won't buy that, believing that a browser can be built without relying on advertising. Ladybird is taking this approach - so we'll find out.

> If Firefox’s market share dips any lower website makers won’t support it

This is the risk the exec team must know they've taken. Specifically: what proportion of the current Firefox user base exists because of the historic pro-privacy stance, and what percentage of that will leave because of the advertising-based future?

[0] https://www.anonymco.com/

--

EDIT: added missing reference


> Others won't buy that, believing that a browser can be built without relying on advertising. Ladybird is taking this approach - so we'll find out.

I'm afraid that we'll find out indeed and end up with no Ladybird and no Firefox either.


What you do will, in part, depend on how you feel about 2 things:

1. Mozilla is now an advertising business - see e.g. links in this El Reg post [0].

2. How you feel about the alternatives.

Behind the PR bluffing of the last few days, #1 is clear. Mozilla has hitched the wagon to advertising.

There's unlikely to be a single good answer for #2. All the alternatives have compromises: Vivaldi is Chrome-based and has some closed-source code; Brave has crypto and Eich's political views (and is also Chrome-based); the various Firefox forks (LibreWolf, Pale Moon, Waterfox, ...) all have questions over their sustainability.

Perhaps the most promising is Ladybird, but it's a good way off yet.

Let's hope we're near the bottom of the enshittification curve and there are positives on the horizon somewhere.

[0]: https://www.theregister.com/2025/03/02/mozilla_introduces_te...


"When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox."


The US firearm mortality rate was 5x that of the nearest high-income countries in 2019 [0]. The US had 120 firearms per 100 people in 2018 with 80% of all homicides being gun-related [1].

Those statistics may not be wholly attributable to differences in gun laws but it seems a stretch to suggest they're unrelated.

[0] https://www.linkedin.com/pulse/us-midst-public-health-crisis...

[1] https://www.bbc.co.uk/news/world-us-canada-41488081


thank you for writing this.

I cut my teeth on OS/2 in the early 90s, where using threads and processes to handle concurrent tasks was the recommended programming model. It was well-supported by the OS, with a comprehensive API for process/thread creation, deletion and inter-task communication. It was a very clear mental model: put each sequential flow of operations in its own process/thread, and let the operating system deal with scheduling - including pausing tasks that were blocked on I/O.

My next encounter was Windows 3, with its event loop and cooperative multi-tasking. Whilst the new model was interesting, I was perplexed by needing to interleave my domain code with manual decisions on scheduling. It felt haphazard and unsatisfactory that the OS didn't handle scheduling for me. It made me appreciate more the benefits of OS-provided pre-emptive multi-tasking.

The contrast in models was stark: pre-emptive multi-tasking seemed obviously better. And so it proved: NT bestowed it on Windows, and NeXT did the same for Mac.

Which brings us to today. I feel like I'm going through Groundhog Day with the renaissance of cooperative multi-tasking: promises, async/await and such. There's another topic today [0] that illustrates the challenges of attempting to perform actions concurrently in JavaScript. It brought back all the perplexity and haphazard scheduling decisions from my Windows 3 days.
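
To make the contrast concrete, here's a rough sketch (not any particular historical API, just an illustration in Python) of the cooperative model: every task has to decide where to hand control back, and one forgotten yield point starves everything else.

    def render_ui():
        # a task that just wants to keep the screen fresh
        for _ in range(5):
            print("update UI")
            yield  # explicit yield point: a scheduling decision in domain code

    def crunch_numbers(n):
        total = 0
        for i in range(n):
            total += i * i
            if i % 1000 == 0:
                yield  # forget this, and render_ui never runs until we're done
        print("result:", total)

    def run(tasks):
        # a trivial round-robin "event loop"
        tasks = list(tasks)
        while tasks:
            task = tasks.pop(0)
            try:
                next(task)
                tasks.append(task)
            except StopIteration:
                pass

    run([render_ui(), crunch_numbers(10_000)])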

As you note:

> Of course, context switching between different tasks is not free, and event loops have frequently been able to provide higher efficiency.

This is indeed true: having an OS or language runtime manage scheduling does incur an overhead. And, indeed, there are benchmarks [1] that can be interpreted as illustrating the performance benefits of cooperative over pre-emptive multitasking.

That may be true in isolation, but it inevitably places the scheduling burden back on the application developer. Concurrent sequences of application domain operations - with the OS/runtime scheduling them - seem like a better division of responsibility.

[0]: https://news.ycombinator.com/item?id=42592224

[1]: https://hez2010.github.io/async-runtimes-benchmarks-2024/tak...


Did you ever use SOM?

To this day it still seems it had a much better approach to component development and related tooling than even the COM reboot that WinRT offers.


Yes! Fond memories. I put it firmly in the Betamax category: superior technology that lost out for political/marketing reasons.


for those curious about SOM: OS/2 Technical Library, "System Object Model Guide and Reference"

https://archive.org/details/os2-2.0-som-1991


What makes me angriest about the current async propaganda... and I use the term deliberately to distinguish it from calm discussions about relative engineering tradeoffs, which is a different discussion... is the idea that it started with Node.

Somehow we collectively took all the incredible experience with cooperative multitasking gathered over literally decades prior to Node and just chucked it in the trash can and had to start over at Day Zero re-learning how to use it.

This is particularly pernicious because the major issue with async is that it scales more poorly than threads, due to the increasing design complexity and the ever-increasing chances that the various implicit requirements that each async task has for the behavior of other tasks in the system will conflict with each other. You have to build systems of a certain size before it reveals its true colors. By then it's too late to change those systems.


I would frame it a bit differently. Async scales very elegantly if and only if your entire software stack is purpose-built for async.

The mistake most people are making these days is mixing paradigms within the same thread of execution, sprinkling async throughout explicitly or implicitly synchronous architectures. There are deep architectural conflicts between synchronous and asynchronous designs, and trying to use both at the same time in the same thread is a recipe for complicated code that never quite works right.

If you are going to use async, you have to commit to it with everything that entails if you want it to work well, but most developers don't want to do that.
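
A minimal sketch of that clash, assuming nothing beyond the Python standard library: one synchronous call inside a coroutine stalls every other coroutine sharing the event loop.

    import asyncio, time

    async def blocking_style_handler():
        time.sleep(1)            # synchronous sleep: the whole event loop stops here
        return "done (blocking)"

    async def async_style_handler():
        await asyncio.sleep(1)   # yields to the loop while waiting
        return "done (async)"

    async def heartbeat():
        for _ in range(4):
            print("tick", round(time.monotonic(), 2))
            await asyncio.sleep(0.3)

    async def main():
        # Swap in async_style_handler and the ticks keep flowing while it waits;
        # with blocking_style_handler they bunch up after the one-second stall.
        await asyncio.gather(heartbeat(), blocking_style_handler())

    asyncio.run(main())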


This is actually a major issue in the LLM wrapper space. Building things like agents (which I think are insanely overhyped and I am so out on but won't elaborate on), usually in Python, where you are making requests that might take 1-5 seconds to complete, with dependencies between responses, you basically need expert-level async knowledge to build anything interesting. For example, say you want two agents talking to each other and "thinking" independently in the same single-threaded Python process. You need to write your code in such a way that one agent thinking (making a multi-second call to an LLM) does not block the other from thinking, but at the same time, when the agents talk to each other they shouldn't talk over each other. Now imagine you have n of these agents in the same program, say behind an async endpoint on a FastAPI server. It gets complicated quickly.
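
For what it's worth, a stripped-down sketch of the shape of that problem (with a fake stand-in for the LLM call, so nothing here is a real API): each agent awaits its own slow call so the others keep thinking, while a shared lock keeps the conversation from interleaving.

    import asyncio, random

    async def fake_llm_call(prompt: str) -> str:
        await asyncio.sleep(random.uniform(1, 3))  # stand-in for a multi-second API call
        return f"reply to: {prompt!r}"

    class Agent:
        def __init__(self, name: str, conversation_lock: asyncio.Lock):
            self.name = name
            self.lock = conversation_lock

        async def think(self) -> str:
            # thinking is private: no lock, so all agents can await their own
            # slow calls at the same time on the single event loop
            return await fake_llm_call(f"{self.name} plans its next message")

        async def speak(self, thought: str):
            # speaking touches shared state: take the lock so turns don't overlap
            async with self.lock:
                print(f"{self.name} says: {thought}")
                await asyncio.sleep(0.1)  # e.g. appending to a shared transcript

    async def run_agent(agent: Agent, turns: int):
        for _ in range(turns):
            thought = await agent.think()
            await agent.speak(thought)

    async def main():
        lock = asyncio.Lock()
        agents = [Agent(name, lock) for name in ("alice", "bob")]
        await asyncio.gather(*(run_agent(a, turns=2) for a in agents))

    asyncio.run(main())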


It's also unnecessary for virtually all actual systems today.

The systems that can potentially benefit from async/await are a tiny subset of what we build. The rest just don't even have the problem that async/await purports to solve, never mind if it actually manages to solve it.


> even in business code which does not want to maintain its worker pool or spool up a new BEAM process every time it needs to send two HTTP requests at the same time.

Just... no. First off, I'll bet a vanishingly small number of application developers on BEAM languages do anything to manage their own worker pools. The BEAM does it for them: it's one of its core strengths. You also imply that "spinning up a new BEAM process" has significant overhead, either computationally or cognitively. That, again, is false. Spinning up processes is intrinsic to Erlang/Elixir/Gleam and other BEAM languages. It's encouraged, and the BEAM has been refined over the years to make it fast and reliable. There are mature, robust facilities for doing so - as well as communicating among concurrent processes during their execution and/or retrieving results at termination.

You've made clear before that you believe async/await is a superior concurrency model to the BEAM process-based approach. Or, more specifically, async/await as implemented in .Net [1]. Concurrent programming is still in its infancy, and it's not yet clear which model(s) will win out. Your posts describing .Net's async approach are helpful contributions to the discussion. Statements such as that quoted above are not.

[1]: https://news.ycombinator.com/item?id=40427935

--

EDIT: fixed typo.


> You also imply that "spinning up a new BEAM process" has significant overhead, either computationally or cognitively. That, again, is false.

Spawning 1M processes with Elixir takes anywhere between ~2.7 and 4 GiB of memory, with significant CPU overhead. Alternatives are much more efficient at both: https://hez2010.github.io/async-runtimes-benchmarks-2024/tak... (a revised version).

Go, for example, probably does not want you to spawn goroutines that are too short-lived, as each one carries at least 2 KiB of allocations by default, which is not very kind to Go's GC (or any GC or allocator, really). So it expects you to be at least somewhat modest with such concurrency patterns.

In Erlang and Elixir the process model implementation follows similar design choices. However, Elixir lets you idiomatically await task completion - its approach to concurrency is much more pleasant to use and less prone to user error. Unfortunately, the per-process cost remains very goroutine-like and is the highest among the concurrency abstractions implemented by other languages. Unlike Go, it is also very CPU-heavy: when a large number of processes are spawned and exit quickly, it takes quite a bit of overhead for the runtime to keep up.

Sure, BEAM languages are a poor fit for compute-intensive tasks without bridging to NIFs but, at least in my personal experience, the use cases for "task interleaving" are everywhere. They are just naturally missed because usually the languages do not make it terse and/or cheap - you need to go out of your way to dispatch operations concurrently or in parallel, so the vast majority doesn't bother, even after switching to languages where it is easy and encouraged.
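
As a sketch of what "terse and cheap" buys you (the fetch functions below are placeholders, not any real API): once overlapping two slow calls is a one-liner, it actually gets done.

    import asyncio

    async def fetch_user(user_id):
        await asyncio.sleep(1)   # stand-in for a network call
        return {"id": user_id}

    async def fetch_orders(user_id):
        await asyncio.sleep(1)   # another independent network call
        return [{"order": 1}, {"order": 2}]

    async def main():
        # sequential: ~2 s; concurrent: ~1 s, with no pools or process management
        user, orders = await asyncio.gather(fetch_user(42), fetch_orders(42))
        print(user, orders)

    asyncio.run(main())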

(I'm not stating that the async/await model in .NET is the superior option, I just think it has good tradeoffs for the convenience and efficiency it brings. Perhaps a more modern platform will come up with an inverted approach where async tasks/calls are awaited by default without syntax noise, and the cost of doing "fork" (or "go" if you will) and then "join" is the same as letting the task continue in parallel and then awaiting it, without the downsides of virtual threads as we have them today)


Looking forward to you doing a Show HN with a TLA+ powered C# Whatsapp killer or somesuch. Maybe out-compete Cisco after that?


Agree. The remaining comments boil down to:

1. "It's not visual, it's text". Yeah, but: how many "visual" representations have no text? And there _are_ visuals in there: the depictions of state space. They include text (hard to see how they'd be useful without) but aren't solely so.

2. "Meh, verification is for well paid academics, it's not for the real world". First off, I doubt those "academics" are earning more than median sw devs, never mind those in the SV bubble. More importantly: there are well-publicised examples of formal verification being used for real-world code, see e.g. [1].

It's certainly true that verification isn't widespread. It has various barriers, from its use of formal mathematical theory and notation to the compute load arising from the combinatorial explosion of the state space. Even if you don't formally verify, understanding the size of the state space and the non-deterministic execution paths of concurrent code is fundamentally important. As Dijkstra said [2]:

> our intellectual powers are rather geared to master static relations and that our powers to visualise processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static process and the dynamic program, to make the correspondence between the program (spread out in space) and the process (spread out in time) as trivial as possible.

He was talking about sequential programming: specifically, motivating the use of structured programming. It's equally applicable to concurrent programming though.
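
To put a rough number on "combinatorial explosion" (my own back-of-the-envelope, not from the article): n threads each executing k atomic steps can interleave in (n*k)! / (k!)^n ways.

    from math import factorial

    def interleavings(n_threads: int, steps_each: int) -> int:
        # number of distinct orderings of the threads' steps, preserving each
        # thread's own program order: the multinomial coefficient
        return factorial(n_threads * steps_each) // factorial(steps_each) ** n_threads

    print(interleavings(2, 10))  # 184,756
    print(interleavings(3, 10))  # ~5.55e12
    print(interleavings(4, 10))  # ~4.71e21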

[1]: https://github.com/awslabs/aws-lc-verification

[2]: https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.p...


> Yeah, but: how many "visual" representations have no text?

What does this mean and how does it excuse a 'visual' article being all text?

> I doubt those "academics" are earning more than median sw devs

Do you think the point might have been more about the pragmatism of formal verification?


indeed, good description. Noting that Vue etc. are MVC in the browser, with (typically) calls to JSON HTTP endpoints on the back end to read/write data.

jQuery was born in a time of server-side rendering (SSR) where, in essence, MVC happened on the back end and HTML was shipped to the browser. In that model, the browser is essentially a terminal that renders output formed on the back end. jQuery was one of the early libraries to promote behaviour in the browser instead of "browser as terminal".

It feels like there's a bit of a swing back to the SSR model, e.g. with the growing popularity of htmx [0], though there are still many strong proponents of the MVC-in-browser approach.

[0] https://htmx.org/


    > JQuery was born in a time of server-side rendering (SSR) where, in essence, MVC happened in the back end and it shipped html to the browser.
Yeah, I think this hits the nail on the head. It was also a time of smaller apps and smaller teams, so it was easier to manage state even with direct mutation of the DOM tree.

With bigger and more complex apps you needed multiple teams, and then it's no longer really possible to effectively track the state of the DOM if multiple teams could be manipulating it from different components.

You can see that the shadow DOM and web components are one way of thinking about this problem. Separating the render from the state (a la Vue, React, Angular et al) is another.


Indeed. OTOH, "focus your main code on the happy path, and put your error handling in the supervisor tree" is unfortunately a bit less pithy.

Shades of "boring technology"[0] which might better be described as "choose technology that is a good fit for the problem, proven, reliable and proportionate. Spend your innovation tokens sparingly and wisely".

[0]: https://boringtechnology.club/


I'm personally caught between my attachment to the "boring technology" philosophy and my desire to try Elixir, which seems like exactly the kind of obscure and exotic technology that it rejects.


I actually think Elixir is the perfect boring technology now. The language has matured, and is not changing much. There will be a big addition at some point with the new type system, but other than that the language is pretty much done. Underneath Elixir is Erlang, which is extremely mature and highly boring in the good way. LiveView might be the only exception, but even that is now becoming boring as it has gotten stable. If you are a believer in boring, you should definitely give Elixir a try.


Erlang is old, reliable tech that used to scare people once upon a very distant time for

- running on a VM (now boringly normal)

- using a pure-functional language with immutable terms (the pure-functional language fad has since then come and gone and this is now archaic and passé if anything)

But languages only get stereotyped once. At any rate, it's pretty boring and old.


Already said by two sibling comments in all the detail I would have, but I wanted to third it: as a subscriber to boring technology myself, and never one to jump on shiny new tech, Elixir/Erlang is the epitome of "boring technology." Elixir does so much on its own that you can often cut out the need for a lot of other dependencies.

