It's not either/or; you can combine the two. I've worked on a system that did real-time audio mixing for tens of thousands of concurrent connections, utilizing >50 cores, mostly with one thread each. Each thread had thread-local data, was receiving/sending audio packets to hundreds/thousands of different IP addresses just fine without worrying about mutexes at all. Try that with tens of thousands of actual OS threads and the associated scheduling overhead.

Having data affinity to cores is also great for cache hit rates.

Here is part of the C++ runtime this is based on: https://github.com/goto-opensource/asyncly. I was the principal author of it when it was created (before it was open sourced).
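
To make the pattern concrete, here is a stripped-down sketch of the general idea (this is not asyncly's actual API; the class and function names are made up for illustration): a fixed set of single-threaded event loops, one per core, with every connection hashed onto one loop so its state is only ever touched from that loop's thread.

    #include <algorithm>
    #include <condition_variable>
    #include <cstdint>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <unordered_map>
    #include <vector>

    // One single-threaded event loop per core. The mutex guards only the
    // task inbox; per-connection state is touched by exactly one thread.
    class Loop {
    public:
        void post(std::function<void()> task) {
            { std::lock_guard<std::mutex> lk(m_); inbox_.push_back(std::move(task)); }
            cv_.notify_one();
        }
        void run() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return !inbox_.empty(); });
                    task = std::move(inbox_.front());
                    inbox_.pop_front();
                }
                task();  // always runs on this loop's thread
            }
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::deque<std::function<void()>> inbox_;
    };

    struct ConnectionState { std::uint64_t packets = 0; /* jitter buffer, codec, ... */ };

    int main() {
        const unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<Loop> loops(n);
        // Per-loop connection tables: loops[i] is the only thread touching state[i].
        std::vector<std::unordered_map<std::uint64_t, ConnectionState>> state(n);
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([&loops, i] { loops[i].run(); });

        // A packet for connection `id` is always handled on loops[id % n],
        // so no lock is needed around its ConnectionState.
        auto on_packet = [&](std::uint64_t id) {
            loops[id % n].post([&state, id, n] { ++state[id % n][id].packets; });
        };
        on_packet(42);

        for (auto& w : workers) w.join();  // the loops run forever in this sketch
    }

The only mutex in sight guards each loop's inbox; the per-connection data itself never needs one, because exactly one thread ever touches it.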




> Each thread had thread-local data, was receiving/sending audio packets to hundreds/thousands of different IP addresses just fine without worrying about mutexes at all.

It doesn't sound like they're really sharing data with each other; it looks like your logic is nicely linearizable and data-localized, and you can't implement access to some global hashmap that way, for example.

> Try that with tens of thousands of actual OS threads and the associated scheduling overhead.

I run this (10k threads blocked on DB access) in prod and it works fine for my needs. There are lots of statements on the internet about the overhead, but not many benchmarks showing how large it actually is.
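
For anyone who wants a number for their own machine rather than folklore, here is a rough sketch of the kind of measurement I mean (spawn time for 10k threads that immediately block, roughly like threads waiting on a DB; memory and context switches can be watched from outside with e.g. /usr/bin/time -v). It's a toy, not a rigorous benchmark:

    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Spawns N threads that immediately block, then reports how long the
    // spawning took. The threads stay parked until `done` is flipped.
    int main() {
        constexpr int N = 10000;
        std::mutex m;
        std::condition_variable cv;
        bool done = false;

        auto t0 = std::chrono::steady_clock::now();
        std::vector<std::thread> threads;
        threads.reserve(N);
        for (int i = 0; i < N; ++i)
            threads.emplace_back([&] {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return done; });  // "blocked on the DB"
            });
        auto t1 = std::chrono::steady_clock::now();
        std::printf("spawned %d blocked threads in %lld ms\n", N,
            (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count());

        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (auto& t : threads) t.join();
    }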

> Here is part of the C++ runtime this is based on

Yeah, so I need one runtime on top of another runtime, with unknown quality, support, longevity, and an unknown number of gotchas.


> It doesn't sound like they're really sharing data with each other; it looks like your logic is nicely linearizable and data-localized, and you can't implement access to some global hashmap that way, for example.

Yes, because data can have thread affinity. Data doesn't need to be shared by _all_ connections, just by a few hundred/thousand. This enables connections to be scheduled to run on the same thread so that they can share data without synchronization.
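
Even the "global hashmap" case fits this model if you shard it: every key is owned by exactly one worker thread, writes are posted to the owner, and cross-thread reads come back through a future. A simplified sketch (made-up names, nothing from asyncly, no shutdown handling):

    #include <condition_variable>
    #include <cstddef>
    #include <cstdio>
    #include <deque>
    #include <functional>
    #include <future>
    #include <memory>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <unordered_map>
    #include <vector>

    // A "global" string->int map, sharded so each shard is only ever touched
    // by its owning worker thread. The per-worker mutex guards the task
    // inbox, not the map data.
    class ShardedMap {
        struct Worker {
            std::unordered_map<std::string, int> data;  // touched only by this worker
            std::mutex m;
            std::condition_variable cv;
            std::deque<std::function<void()>> inbox;
            void post(std::function<void()> f) {
                { std::lock_guard<std::mutex> lk(m); inbox.push_back(std::move(f)); }
                cv.notify_one();
            }
            void run() {
                for (;;) {
                    std::function<void()> f;
                    {
                        std::unique_lock<std::mutex> lk(m);
                        cv.wait(lk, [this] { return !inbox.empty(); });
                        f = std::move(inbox.front());
                        inbox.pop_front();
                    }
                    f();
                }
            }
        };
        std::vector<Worker> workers_;
        Worker& owner(const std::string& key) {
            return workers_[std::hash<std::string>{}(key) % workers_.size()];
        }
    public:
        explicit ShardedMap(std::size_t shards) : workers_(shards) {
            for (auto& w : workers_)
                std::thread([&w] { w.run(); }).detach();  // no shutdown, for brevity
        }
        void put(std::string key, int value) {
            Worker& w = owner(key);
            w.post([&w, key = std::move(key), value] { w.data[key] = value; });
        }
        std::future<int> get(std::string key) {
            Worker& w = owner(key);
            auto p = std::make_shared<std::promise<int>>();
            auto fut = p->get_future();
            w.post([&w, key = std::move(key), p] { p->set_value(w.data[key]); });
            return fut;
        }
    };

    int main() {
        ShardedMap map(4);
        map.put("alice", 1);
        // put() and get() for the same key land on the same worker, in order.
        std::printf("alice = %d\n", map.get("alice").get());
    }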

> I run this (10k threads blocked on DB access) in prod and it works fine for my needs. There are lots of statements on the internet about the overhead, but not many benchmarks showing how large it actually is.

The underlying problem is old and well researched: https://en.wikipedia.org/wiki/C10k_problem


> Data doesn't need to be shared by _all_ connections,

Data doesn't need to be shared in your specific case, but that's not true in general.

> The underlying problem is old and well researched: https://en.wikipedia.org/wiki/C10k_problem

A wiki page doesn't mean it is well researched. Where can I see results of overhead measurements on modern hardware?


> A wiki page doesn't mean it is well researched. Where can I see results of overhead measurements on modern hardware?

Here is how this works: at the bottom of the wiki page, there are referenced papers. They contain measurements on modern hardware. You read those, then perhaps go to Google and see if there is any newer research that cites those papers.

If you don't feel like reading papers, HN has a search bar at the bottom that yields a wealth of results: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


I spent a short time looking and found that most papers on that page are very outdated or don't have the relevant info (no measurements of overhead). Give a specific paper and citation, or we can finish this discussion.


https://blog.erratasec.com/2013/02/multi-core-scaling-its-no...

Maybe you should just take a college computer architecture course along the lines of Hennessy/Patterson. This is nothing new; I learned much of this in college 15 years ago. The problem has only gotten worse since then; computers have not become more single-threaded.


My reading is that the graphs in that post were just made up by the author to illustrate his idea and aren't backed by any benchmarks or measurements. At least, I don't see any links to code in the article, or any mention of what logic he actually ran or how many threads/connections he spawned.

> The problem has only gotten worse since then; computers have not become more single-threaded.

Computers can now handle 10k blocking connections with ease.


> Yeah, so I need one runtime on top of another runtime, with unknown quality, support, longevity, and an unknown number of gotchas.

It's a library. It solved our problems at the time, years ago. It's still used in production, with billions of audio minutes per month piped through it. You don't have to use it; I merely referred to it as an example. A similar library is proposed to be included in C++23: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p23...
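
If you want a taste of that style without pulling in our library, here is roughly what the hello world looks like when written against the stdexec reference implementation of the proposal (https://github.com/NVIDIA/stdexec); the names follow its examples and may differ in detail from whatever gets standardized:

    #include <cstdio>
    #include <stdexec/execution.hpp>        // sender/receiver building blocks
    #include <exec/static_thread_pool.hpp>  // a simple scheduler to run them on

    int main() {
        // A fixed pool of worker threads; work gets scheduled onto it.
        exec::static_thread_pool pool(4);
        auto sched = pool.get_scheduler();

        // Describe the work as a pipeline of senders; nothing runs yet.
        auto work = stdexec::schedule(sched)
                  | stdexec::then([] { return 41; })
                  | stdexec::then([](int x) { return x + 1; });

        // Run the pipeline and block this thread until the result is ready.
        auto [result] = stdexec::sync_wait(std::move(work)).value();
        std::printf("result = %d\n", result);
    }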


> It's still used in production, with billions of audio minutes per month piped through it.

There is a ton of overengineered, unmaintainable code in prod; that doesn't mean I need to follow it as an example without much justification.

> A similar library is proposed to be included in C++23

Hm, I went through the code example and would prefer my current approach as much simpler and more readable.



