
His whole thing is making money. Whatever the grift is, he'll grab it; that's all there is to personas that pretend science is fake.


Nuh-uh! That's not his whole thing. He's also into destroying the lives of his ex-wives after committing mass adultery and getting them to kill themselves.


Part of me feels like something like Redox isn't exactly worth it. Now, I hope the project succeeds in whatever goals they have, and honestly where they've gotten to is impressive; what I mean to say is that I think it would be far more interesting to imagine beyond just Unix, and especially beyond the problems of POSIX. I think the "next" thing, if that ever materializes, needs to be exactly that: something reimagined. Computers aren't emulated PDP-11s anymore.


It seems Fuchsia might be something for you.


> To me it looks like that for Linux big part of R4L experiment is to specifically understand whether Rust people can convince key stakeholders that R4L is good idea and get their buy-in and that is why he doesn't attempt to force it.

This is the entire point. This has been DONE. First it's "let's see if you can build a good driver", now it's "ew, Rust". The maintainer of the DMA subsystem is showing how they're actively trying to make sure Rust doesn't make it in.


No, it is not the entire point. No one is really doubting whether you can write a driver in Rust, C++ or Swift. The whole experiment is whether you can slowly move into existing mature kernel subsystems without being too disruptive.


If a minority of maintainers scream every time they see other languages due to their insecurities, technical inability and stubbornness, and their overreactions get a pass, it is not the fault of Rust, C++ or Swift. The source of the disturbance is not the people who are making an effort to cause as little disturbance as possible.

Blatant NIMBYism is the problem here and you cannot reduce it by accepting everything.


In general, upstreaming code to Linux involves interacting with difficult and sometimes outright hostile people. I've certainly had my share of both with much smaller changes. IMO pushing something like R4L requires a very thick skin and an almost infinite amount of patience. Bitching about that won't get you far; you need to be able to either work with or around those people.


This again gets back to the main point, which you keep misrepresenting. This has nothing to do with a thick skin; this is a core subsystem maintainer outright saying they won't support R4L, which means it's dead.


I'm not misrepresenting anything and R4L is not dead. In fact, two ways forward were suggested right in the LKML email thread:

- Send the series directly to Linus, since no code that Hellwig is a maintainer of is actually being changed by it, and let Linus decide whether to ignore Hellwig's nack. Linus may have done so before, but likely not after marcan's public meltdown.

- Copy/paste the code into every driver that will be using it. If it becomes useful, it will put more pressure on Hellwig down the road, because people will question why every change to the code being wrapped requires a fix in 10 different copies.

People here and on Reddit who are unfamiliar with the Linux development process but are attracted to the "drama" because it involves Rust somehow keep missing it.


> No, it is not the entire point. No one is really doubting whether you can write a driver in Rust, C++ or Swift. The whole experiment is whether you can slowly move into existing mature kernel subsystems without being too disruptive.

Which Chris did doubt, as a way to gatekeep Rust (as you misrepresented, and which is clearly visible in the LKML thread).

Regardless, back to the other stuff. First point: that was suggested in the LKML thread as well, and it still does not really solve the problem, which is not TECHNICAL but POLITICAL. Second point: obvious, and wasteful, and thus again a political move, which is the entire point of this whole saga. It isn't about drama, it's about the political side of kernel development being tiring and wasteful.


> Which Chris did doubt, as a way to gatekeep Rust (as you misrepresented, and which is clearly visible in the LKML thread).

Can you provide the exact quote where Hellwig is suggesting that it is impossible to write a driver in Rust? No, you can't? So who exactly is misrepresenting here?

> Regardless, back to the other stuff. First point: that was suggested in the LKML thread as well, and it still does not really solve the problem, which is not TECHNICAL but POLITICAL. Second point: obvious, and wasteful, and thus again a political move, which is the entire point of this whole saga. It isn't about drama, it's about the political side of kernel development being tiring and wasteful.

You are shifting the goalposts from this making R4L "dead" to the way forward being "tiring and wasteful". It doesn't look like you are arguing in good faith, so I won't participate in the discussion with you anymore.


Keep in mind that it takes at least 3 months to produce an M4, and the design was finalized long before that. So most likely yes.


This is a flawed comparison in many ways. In case you don't understand: IE was problematic because of its massive install base and everyone writing their websites only, and only, for Chrome. Oh wait, typo'd there, meant IE.


SQLite performance is kinda nutty. My design of the DB at the time was probably poor (but I was 15, so cut me some slack :D), but I made an app that had to run on iPod Touches so we could use the accelerometer for physics class in school.

Initially the performance was too poor, but after a bunch of reading and some changes in how I was using SQLite, I got it to easily do more than 100k rows per second of insertions (the db wasn't very wide). On an old embedded device, mind you. I didn't need that much, but "wowza!" was my expression at the time. I've had a love for it in my heart ever since.
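For context, the usual levers for that kind of speedup are a single transaction around the whole batch and a reused prepared statement instead of autocommitting each row; the comment doesn't spell out which changes were made, so this is only a guess at the idea. A rough sketch in C# with Microsoft.Data.Sqlite (the readings table and sample data are made up for illustration; the original would have used the SQLite C API on the device):

    using Microsoft.Data.Sqlite;

    // Stand-in for real accelerometer data.
    var samples = new[] { (Time: 0.00, X: 0.01, Y: -0.02, Z: 0.98) };

    using var connection = new SqliteConnection("Data Source=accel.db");
    connection.Open();

    using var create = connection.CreateCommand();
    create.CommandText = "CREATE TABLE IF NOT EXISTS readings (t REAL, x REAL, y REAL, z REAL)";
    create.ExecuteNonQuery();

    // One transaction around the whole batch: one commit instead of one per row.
    using var transaction = connection.BeginTransaction();

    using var insert = connection.CreateCommand();
    insert.Transaction = transaction;
    insert.CommandText = "INSERT INTO readings (t, x, y, z) VALUES ($t, $x, $y, $z)";
    var t = insert.Parameters.Add("$t", SqliteType.Real);
    var x = insert.Parameters.Add("$x", SqliteType.Real);
    var y = insert.Parameters.Add("$y", SqliteType.Real);
    var z = insert.Parameters.Add("$z", SqliteType.Real);

    foreach (var sample in samples)
    {
        t.Value = sample.Time;
        x.Value = sample.X;
        y.Value = sample.Y;
        z.Value = sample.Z;
        insert.ExecuteNonQuery(); // the command (and its prepared statement) is reused
    }

    transaction.Commit();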


You can decode protobuf in the same way; I've written several decoders in the past that don't rely on an external schema. There are some types you can't always decode with 100% confidence, but then again JSON or something like it isn't strongly typed either.
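Roughly, a schema-less decoder just walks tag/value pairs: each tag is a varint encoding (field_number << 3) | wire_type, and the wire type tells you how to read the payload. A sketch of the idea in C# (illustrative only; without the schema a length-delimited field could be a string, bytes, or a nested message, which is where the "not 100% confidence" part comes from):

    using System.Collections.Generic;

    static class ProtoScanner
    {
        // Reads a base-128 varint starting at 'pos' and advances it.
        static ulong ReadVarint(byte[] buf, ref int pos)
        {
            ulong value = 0;
            int shift = 0;
            while (true)
            {
                byte b = buf[pos++];
                value |= (ulong)(b & 0x7F) << shift;
                if ((b & 0x80) == 0) return value;
                shift += 7;
            }
        }

        // Walks top-level fields: each tag is a varint encoding (field_number << 3) | wire_type.
        public static List<(int Field, int WireType, string Summary)> Scan(byte[] msg)
        {
            var fields = new List<(int Field, int WireType, string Summary)>();
            int pos = 0;
            while (pos < msg.Length)
            {
                ulong tag = ReadVarint(msg, ref pos);
                int field = (int)(tag >> 3);
                int wire = (int)(tag & 0x7);
                switch (wire)
                {
                    case 0: // varint: int32/int64/bool/enum... can't tell which without the schema
                        fields.Add((field, wire, ReadVarint(msg, ref pos).ToString()));
                        break;
                    case 1: // fixed64: double or (s)fixed64
                        pos += 8;
                        fields.Add((field, wire, "8-byte value"));
                        break;
                    case 2: // length-delimited: string, bytes or nested message -- ambiguous without the schema
                        int len = (int)ReadVarint(msg, ref pos);
                        fields.Add((field, wire, $"{len} bytes (string/bytes/submessage?)"));
                        pos += len;
                        break;
                    case 5: // fixed32: float or (s)fixed32
                        pos += 4;
                        fields.Add((field, wire, "4-byte value"));
                        break;
                    default: // groups / unknown wire types are not handled in this sketch
                        return fields;
                }
            }
            return fields;
        }
    }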


Protobuf doesn't have key names without the schema, right?

Wouldn't it just decode to 1,type,value 2,type,value? Without the schema there are no names.

Human-readable key names are a big part of what makes a self-describing format useful, but they also contribute to bloat; a format with an embedded schema in the header would help.


Sorry I never ended up replying. You are correct, in protobuf you only have field numbers (and I have to admit, what I commented on was a number of years ago by now, so my knowledge is crusty). Usually the point of this type of work, as it was for me, is to reverse some protocol. It didn't make me hate the format though; many protocols I had reversed in the past were vastly more complicated to decode and understand without having field names. If you ever want to melt your brain, get into RTMP decoding :3


Indeed.


Calling vaping safer without there being any good evidence for that is quite a stretch. However, I do despise the pure resource waste. Can we just stop that instead while we investigate the effects of inhaling burning copper and plastic?


There's plenty of evidence; like most people, you've just not bothered to search for it before forming your opinion.

If you're genuinely interested, I recommend starting with this report: https://www.gov.uk/government/publications/e-cigarettes-an-e...


This is honestly getting tiring. Not only is the belittling tone pointless, those reports are neither conclusive nor evidence on their own. I formed my opinion over the years from the diseases that are being caused in a lot of people using these devices. Now, these could be from overuse, other issues, bad mixtures, shoddily made devices, idk, the list can continue. Maybe vaping is safer if controlled correctly. Could be. Just don't state it as a fact when, in fact, it is not known.


It's pretty wild to call out someone's source as inconclusive but not link your own.

Not surprising though.


Out of curiosity, can you list the diseases you mentioned?


We would need sufficient evidence to conclude it's as dangerous as smoking, which I'm not sure we have.

Also, we would need sufficient evidence to conclude that they all have the user inhale burning copper or plastic, which I'm not sure we have.


No, because that takes time, but that is my entire point. Claiming it is not as dangerous without actually knowing that is just, weird :/


It makes intuitive sense: all other things being equal, inhaling combustion products is worse than not inhaling combustion products. Thus, nicotine with combustion products is worse than nicotine without combustion products. We would need sufficient evidence to veer away from that presumption.


Idk, my Xbox controller works on macOS but doesn't on Linux. Same Bluetooth chipset (the magic of multiple drives and hackintoshing some crap). Idk what USB thing you're talking about, but you surely seem capable of providing some actually useful info for people like me who would genuinely like to understand what you're talking about?


Except that the GC makes it exactly not viable for games, and it's one of the biggest problems Unity devs run into. I agree it's a great language, but it's not a do-it-all.


Unity has literally the worst implementation of C# out there right now. Not only is it running Mono instead of .NET (Core) but it's also not even using Mono's generational GC (SGen). They have been working on switching from Mono to .NET for years now because Mono isn't being updated to support newer C# versions but it will also be a significant performance boost, according to one of the Unity developers in this area [1].

IL2CPP, Unity's C# to C++ compiler, does not help for any of this. It just allows Unity to support platforms where JIT is not allowed or possible. The GC is the same if using Mono or IL2CPP. The performance of code is also roughly identical to Mono on average, which may be surprising, but if you inspect the generated code you'll see why [2].

[1] https://xoofx.github.io/blog/2018/04/06/porting-unity-to-cor... [2] https://www.jacksondunstan.com/articles/4702 (many good articles about IL2CPP on this site)


I believe Unity switched to .NET Core last year.


They did not - it is still a work in progress with no announced target release date. They also have no current plans to upgrade the GC being used by IL2CPP (their C# AOT compiler).

https://discussions.unity.com/t/coreclr-and-net-modernizatio...


I could argue the opposite - GC makes it more viable for games. "GC is bad" misses too much nuance. It goes like this: the developer very quickly and productively gets a minimum viable game going using naive C# code. Management and investors are happy with the speed of progress. The developers see frame-rate stutters, so they learn about hot-path profiling, the gen 0/1/2 GC and how to keep it extremely fast, stackalloc, array pooling, Span<T>, native alloc; progressively enhancing quickly until there are no problems. These advanced concepts are quick and low-risk to use, and in many cases they are what you would be doing in other languages anyway.
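For a flavour of what those techniques look like, a small illustrative sketch (ArrayPool, Span<T> and stackalloc are standard .NET; the surrounding methods are made up):

    using System;
    using System.Buffers;

    static class HotPath
    {
        // Small, fixed-size scratch space: stackalloc into a Span<T>; it lives on
        // the stack, so the GC never sees it.
        public static void EncodeHeader(Span<byte> destination, int frameId)
        {
            Span<byte> scratch = stackalloc byte[8];
            BitConverter.TryWriteBytes(scratch, frameId); // write the id into the stack buffer
            scratch.Slice(0, 4).CopyTo(destination);      // copy only the 4 bytes we used
        }

        // Larger, per-frame buffers: rent from the shared pool instead of `new byte[n]` every frame.
        public static void ProcessFrame(int byteCount)
        {
            byte[] buffer = ArrayPool<byte>.Shared.Rent(byteCount);
            try
            {
                Span<byte> frame = buffer.AsSpan(0, byteCount); // rented arrays may be larger than requested
                frame.Clear();
                // ... fill and consume `frame` here ...
            }
            finally
            {
                ArrayPool<byte>.Shared.Return(buffer); // nothing becomes garbage across frames
            }
        }
    }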


When we see FPS drops in games, the reason is usually not C# and its GC. It's mostly poor usage of the graphics pipeline and a lack of optimization. As a former game developer, I had to do a lot of optimization so our games would run nicely on mobile phones with modest hardware.

C# is plenty fast for game programming.


That entirely depends on the game. A recent example is Risk of Rain 2, which had frequent hitches caused by the C# garbage collector. Someone made a mod to fix this by delaying the garbage collection until the next load screen — in other words, controlled memory leakage.

The developers of Risk of Rain 2 were undoubtedly aware of the hitches, but it interfered with their vision of the game, and affected users were left with a degraded experience.

It's worth mentioning that when game developers scope out the features of their game, the available tech informs the feature set. Faster languages thus enable a wider feature set.


> It's worth mentioning that when game developers scope out the features of their game, the available tech informs the feature set. Faster languages thus enable a wider feature set.

This is true, but developer productivity also informs the feature set.

A game could support all possible features if written carefully in bare metal C. But it would take two decades to finish and the company would go out of business.

Game developers are always navigating the complex boundary around "How quickly can I ship the features I want with acceptable performance?"

Given that hardware is getting faster and human brains are not, I expect that over time higher level languages become a better fit for games. I think C# (and other statically typed GC languages) are a good balance right now between good enough runtime performance and better developer velocity than C++.


> frequent hitches caused by the C# garbage collector

They probably create too much garbage. It's equally easy to slow down C++ code with too many malloc/free calls made by the standard library collections and smart pointers.

The solution is the same for both languages: allocate memory in large blocks, implement object pools and/or arena allocators on top of these blocks.

Neither the C++ nor the C# standard library has much support for that design pattern. In both languages, it's something programmers have to implement themselves. I did things like that multiple times in both languages. I found that, when necessary, it's not terribly hard to implement in either C++ or C#.
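A minimal sketch of that pattern in C# (the Bullet type in the usage comment is hypothetical, and a real pool would usually also reset the object's state on return):

    using System;
    using System.Collections.Generic;

    // Minimal object pool: allocate up front, reuse, never let instances become garbage.
    public sealed class ObjectPool<T> where T : class
    {
        private readonly Stack<T> _free;
        private readonly Func<T> _create;

        public ObjectPool(Func<T> create, int preallocate)
        {
            _create = create;
            _free = new Stack<T>(preallocate);
            for (int i = 0; i < preallocate; i++)
                _free.Push(create()); // pay the allocation cost at startup, not mid-frame
        }

        public T Rent() => _free.Count > 0 ? _free.Pop() : _create();

        public void Return(T item) => _free.Push(item); // caller resets the object's state
    }

    // Usage (Bullet is a made-up game type):
    //   var bullets = new ObjectPool<Bullet>(() => new Bullet(), 256);
    //   var b = bullets.Rent(); ... bullets.Return(b);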


> In both languages, it’s something programmers have to implement themselves.

I think this is where the difference between these languages and Rust shines - Rust makes these things explicit, while C++/C# hide them behind compiler warnings.

Some things you can't do in Rust as a result, but really, if the Rust community cares it could port those features (e.g., make an always-on-the-stack type).

Codebase velocity is important to consider in addition to dev velocity; if the code needs to be significantly altered to support a concept that was swept under the rug, e.g. object pools/memory arenas, then that feature is less likely to be used and harder to implement later on.

As you say, it's not hard to do or a difficult concept to grasp, once a dev knows about them, but making things explicit is why we use strongly typed languages in the first place...


The GC that Unity is using is extremely bad by today's standards. C# everywhere else has a significantly better GC.

In this game's case though they possibly didn't do much optimization to reduce GC by pooling, etc. Unity has very good profiling tools to track down allocations built in so they could have easily found significant sources of GC allocations and reduced them. I work on one of the larger Unity games and we always profile and try to pool everything to reduce GC hitches.


Apparently that was released in 2019? Both C# and dotnet have had multiple major releases since then, with significant performance improvements.


A good datapoint, thanks. Extending my original point - C# got really good in the last 5 years with regards to performance & low-level features. There might be an entrenched opinion problem to overcome here.


Anybody writing a game should be using a game engine. There are too many things you want in a game that just come "free" from an engine and that you would spend years writing by hand.

GC can work or not when writing a game engine. However, everybody who writes a significant graphical game engine in a GC language learns how to fight the garbage collector - at the very least delaying GC until between frames. Often they treat the game like safety-critical code: preallocate all buffers so that there is no garbage in the first place (or perhaps minimal garbage). Going without garbage collection might technically use more CPU cycles, but in general they are spread out more over time and so frame times are more consistent.


It's hard to use C# without creating garbage. But it's not impossible. Usually you'd just create some arenas for your important stuff, and avoid allocating a lot of transient objects such as enumerators etc. So long as you can generate 0 bytes of allocation each frame, you won't need a GC no matter how many frames you render. The question is only this: does it become so convoluted that you could just as well have used C++?


Enumerators are usually value types as long as you use the concrete type. Using the interface will box it. You can work around this by simply using List<T> as the type instead of IEnumerable<T>.

You have to jump through some hoops but it's really not that convoluted and miles easier than good C++.
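A small example of the difference (illustrative; foreach over the concrete List<int> uses its struct enumerator, foreach over the interface boxes it):

    using System.Collections.Generic;

    static class EnumeratorBoxingDemo
    {
        static int SumConcrete(List<int> list)
        {
            // List<int>.GetEnumerator() returns the struct List<int>.Enumerator,
            // so this foreach allocates nothing.
            int sum = 0;
            foreach (var x in list) sum += x;
            return sum;
        }

        static int SumInterface(IEnumerable<int> list)
        {
            // IEnumerable<int>.GetEnumerator() returns IEnumerator<int>,
            // which boxes the struct enumerator onto the heap on every call.
            int sum = 0;
            foreach (var x in list) sum += x;
            return sum;
        }
    }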


The problem with it is that you don't know. The fundamental language construct "foreach" is one that may or may not allocate and it's hard for you as a developer to be sure. Many other low level things do this or at least used to (events/boxing/params arrays, ...).

I wish there was an attribute in C# like "[MustNotAllocate]" which fails the compilation on known allocations such as these. It's otherwise very easy to accidentally introduce some tiny allocation into a hot loop, and it only manifests as a tiny pause after 20 minutes of runtime.


Most often you do know whether an API allocates. It is always possible to microbenchmark it with [MemoryDiagnoser] or profile it with VS or Rider. I absolutely love Rider's dynamic program analysis that just runs alongside me running an application with F5, ideally in release, and then I can go through every single allocation site and decide what to do.

Even when allocations happen, .NET is much more tolerant of allocation traffic than, for example, Go. You can absolutely live with a few allocations here and there. If all you have are small transient allocations, it means that the live object count will be very low and all such allocations will die in Gen 0. In scenarios like these, you typically only see infrequent sub-500us GC pauses.

Last but not least, .NET is continuously being improved - pretty much all standard library methods already allocate only what's necessary (which can mean nothing at all), and with each release everything that has room for optimization gets optimized further. .NET 9 comes with object stack allocation / escape analysis enabled by default, and .NET 10 will improve this further. Even without this, LINQ for example is well-behaved and can be used far more liberally than in the past.

It might sound surprising to many here but among all GC-based platforms, .NET gives you the most tools to manage the memory and control allocations. There is a learning curve to this, but you will find yourself fighting them much more rarely in performance-critical code than in alternatives.
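For reference, a minimal BenchmarkDotNet sketch of the [MemoryDiagnoser] workflow mentioned above (the benchmark class itself is made up; the diagnoser adds allocated-bytes and GC-count columns to the results):

    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    [MemoryDiagnoser] // adds per-operation Allocated and Gen0/1/2 columns to the results table
    public class ScratchBufferBench
    {
        private readonly byte[] _cached = new byte[4096];

        [Benchmark]
        public int FreshArrayEachCall()
        {
            var buffer = new byte[4096]; // shows up as ~4 KB allocated per operation
            return buffer.Length;
        }

        [Benchmark]
        public int ReusedArray()
        {
            return _cached.Length;       // 0 B allocated
        }
    }

    public static class Program
    {
        public static void Main() => BenchmarkRunner.Run<ScratchBufferBench>();
    }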


While this would be nice for certain applications, I'm not sure it's really needed in general. Most people writing C# don't have to know about these things, simply because it doesn't matter in many applications. If you're writing performance-critical C#, you're already on a weird language subset and know your way around these issues. Plus, allocations in hot loops stand out very prominently in a profiler.

That being said, .NET includes lots of performance-focused analyzers, directing you to faster and less-allocatey equivalents. There surely also is one on NuGet that could flag foreach over a class-based enumerator (or LINQ usage on a collection that can be foreach-ed allocation-free). If not, it's very easy to write and you get compiler and IDE warnings about the things you care about.

At work we use C# a lot and adding custom analyzers ensuring code patterns we prefer or require has been one of the best things we did this year, as everyone on the team requires a bit less institutional knowledge and just gets warnings when they do something wrong, perhaps even with a code fix to automatically fix the issue.


If you know what types you're using, you do know. If you don't know what you're calling, that's a pretty high bar that I'm not sure C++ clears.


If you are calling SomeType.SomeMethod(a, b, c) then you don't know which combinations of a, b, c could allocate unless you can peek into it or try every combination of a, b and c. So it's hard to know in the general case, even with profiling and testing.


The two biggest engines, Unreal and Unity, use a GC. Unity itself uses C#. C# is viable for games but you do need to be aware of the garbage you make.

It's really not that hard to structure a game that pre-allocates and keeps per frame allocs at zero.


At least for Unity, the actual problem lies in IL2CPP and not C#. I have professionally used C# in real-time game servers and GC was never a big issue. (We did use C++ in the lower layer but only for the availability of Boost.Asio, database connectors and scripting engines.)


Unity lets you use either IL2CPP (AOT) or Mono (JIT). Either way it will use the Boehm GC, which is a lot worse than the .NET GC. If your game servers weren't using Unity then they were using a better GC.


Yeah, we rolled our own server framework in .NET mainly because we were doing MMOs and there were no off-the-shelf frameworks (including Unity's) explicitly designed for that. In fact, I believe this is still mostly true today.


> one of the biggest problems Unity devs run into

Unity used Mono. Which wasn't the best C# implementation, performance wise. After Mono changed its license, instead of paying for the license, Unity chose to implement their infamous IL2CPP, which wasn't better.

Now they want to use CoreCLR which is miles better than both Mono and IL2CPP.


Except that is a matter of developer skill, and of Unity using Mono with its lame GC implementation, as proven by CAPCOM's custom engine based on a .NET Core fork, used for Devil May Cry on the PlayStation 5.


We can all agree Unity is terrible.

Would be nice to hear about a Rust Game engine, though.


Check Bevy.


The GC in the modern .NET runtime is quite fast. You can get very low-latency collections in the normal workstation GC mode.

Also, if you invoke GC intentionally at convenient timing boundaries (i.e., after each frame), you may observe that the maximum delay is more controllable. Letting the runtime pick when to do GC is what usually burns people. Don't let the garbage pile up across 1000 frames. Take it out every chance you get.
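A sketch of that idea using the standard GC API (illustrative; whether it actually helps depends on the runtime and the allocation pattern, see the reply below about premature promotion):

    using System;
    using System.Runtime;

    static class FrameGc
    {
        // End-of-frame housekeeping: collect the cheap young generation every frame
        // so garbage never piles up into an expensive full collection mid-gameplay.
        public static void EndOfFrame()
        {
            GC.Collect(0, GCCollectionMode.Forced, blocking: true); // Gen 0 only
        }

        // Full (Gen 2) collections are reserved for moments where a pause is invisible.
        public static void OnLoadScreen()
        {
            GC.Collect();                                               // full blocking collection
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency; // back to gameplay-friendly mode
        }
    }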


> if you invoke GC intentionally at convenient timing boundaries (i.e., after each frame),

Manually invoking GC many times per second is a viable approach?


It can be, yes.

You're basically trading off worse throughput for better latency.

If you forcibly run the GC every frame, it's going to burn cycles repeatedly analyzing the same still-alive objects over and over again. So the overall performance will suffer.

But it means that you don't have a big pile of garbage accumulating across many frames that will eventually cause a large pause when the GC runs and has to visit all of it.

For interactive software like games, it is often the right idea to sacrifice maximum overall efficiency for more predictable stable latency.


This might be more problematic under CoreCLR than under Unity. Prematurely invoking GC will cause objects that would otherwise die in Gen 0 to be promoted to Gen 1, accumulate there and then die there. This will cause unnecessary inter-generational traffic and will extend object lifetimes longer than strictly necessary. Because live object count is the main factor that affects pause duration, this may be undesirable.

It might be more useful to take the OSU! approach as a reference: https://github.com/dotnet/runtime/issues/96213#issuecomment-...

OSU! represents an extreme case where the main game loop runs at 1000 Hz, so for a much more realistic ~120 Hz you have plenty of options.
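For reference, the general-purpose knobs .NET exposes for this, not necessarily what OSU! itself does (the runLevel delegate stands in for the latency-sensitive section):

    using System;
    using System.Runtime;

    static class GcModes
    {
        public static void RunCriticalSection(Action runLevel)
        {
            // Option 1: discourage blocking Gen 2 collections while gameplay is running.
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

            // Option 2: reserve a budget up front and forbid collections entirely for a critical section.
            if (GC.TryStartNoGCRegion(64 * 1024 * 1024)) // 64 MB allocation budget for the section
            {
                try
                {
                    runLevel();
                }
                finally
                {
                    // If the budget was exceeded, the runtime has already ended the region itself.
                    if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                        GC.EndNoGCRegion();
                }
            }
        }
    }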


If you could even just pass an array of objects to be collected or something, this would be so much easier.

Magic, code or otherwise, sucks when the spell/library/runtime has different expectations than your own.

You expect levitation to apply to people, but the runtime only levitates carbon-based life forms. You end up levitating people without their effects (weapons/armor), to the embarrassment of everyone.

There should be no magic, everything should be parameterized, the GC is a dangerous call, but it should be exposed as well (and lots of dire warnings issued to those using it).


> If you could even just pass an array of objects to be collected or something

If you have a bunch of objects in an array that you have a reference to such that you can pass it, then, by definition, those objects are not garbage, since they're still accessible to the program.


Yes. Use a WriteOnlyArray or whatever. Semantics aside though...

There should be some middle ground between RAII and invoking Dispose/delete and full blown automatic GC.


It has worked well in my prototypes. There is a reason a GC.Collect method is exposed for use.


At least for this instance you have a good idea which objects are "ripe" for collection. There should be some way to specify "collect these, my infra objects don't need to be".


Unity (and its GC) is not representative of the performance you get with CoreCLR.

The article discusses ref lifetime analysis, which does have a relationship with the GC, but it does not force you into using one. Byrefs are very special - they can hold references to the stack, to GC-owned memory and to unmanaged memory. You can get a pointer to device-mapped memory and wrap it with a Span<T> and it will "just work".
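A sketch of that (Marshal.AllocHGlobal stands in for whatever native or device-mapped pointer the interop layer hands you):

    using System;
    using System.Runtime.InteropServices;

    static class NativeSpans
    {
        public static unsafe void Demo()
        {
            // Stand-in for a device-mapped / driver-provided pointer.
            IntPtr native = Marshal.AllocHGlobal(4096);
            try
            {
                // The span is just (pointer, length); the GC never sees or moves this memory.
                var bytes = new Span<byte>((void*)native, 4096);
                bytes.Clear();
                bytes[0] = 0xFF;

                // Reinterpret the same memory as uints, no copies involved.
                Span<uint> words = MemoryMarshal.Cast<byte, uint>(bytes);
                Console.WriteLine(words[0]); // 255 on a little-endian machine
            }
            finally
            {
                Marshal.FreeHGlobal(native);
            }
        }
    }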


Well, when I worked in Unity I used to compile C# code with the LLVM backend. It was as fast as C++ code would be. So Unity is perhaps an example in favor of C#.


Games would need an alternative GC optimized for low latency instead of maximum throughput.

AFAIK it has been possible to replace the GC with an alternative implementation for the past few years, but no one has made one yet.

EDIT: Some experimental alternative GC implementations:

https://github.com/kkokosa/UpsilonGC

https://www.codeproject.com/Articles/5372791/Implementing-a-...


Many of the top games in recent years have used it, so you've got a funny definition of "not viable".


Or roll their own, so they used GC in one way or another.


> not viable for games

> Unity devs run into

So it's viable but not perfect


Doesn't Unity use its own GC or transpile to C++? Unity on .NET Core is more than a year away, no?


It uses the prehistoric Mono GC. Additionally, it transpiles IL to C++ because many targets, like consoles and iDevices, don't allow a JIT.

They also have a C# subset called Burst, which could have been avoided if they were using .NET Core.


C# has much better primitives for controlling memory layout than Java (structs, reified generics).

BUT it's definitely not a language designed for no-GC, so there are footguns everywhere - that's why Rider ships special static analysis tools that will warn you about this. So you can keep the GC out of your critical paths, but it won't be pretty at that point. But better than Java :D
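A quick illustration of the struct side of that (a made-up Particle type; an array of structs is a single contiguous allocation, and updating it in place creates no garbage):

    using System;
    using System.Runtime.InteropServices;

    // A plain struct: an array of these is one contiguous block of memory,
    // unlike Java where an array of objects is an array of references.
    [StructLayout(LayoutKind.Sequential)]
    public struct Particle
    {
        public float X, Y, Z;
        public float VelX, VelY, VelZ;
        public float Age;
    }

    public static class ParticleSystem
    {
        // One allocation at startup; updating particles afterwards creates zero garbage.
        private static readonly Particle[] _particles = new Particle[10_000];

        public static void Update(float dt)
        {
            var span = _particles.AsSpan();
            for (int i = 0; i < span.Length; i++)
            {
                ref Particle p = ref span[i]; // mutate in place: no copies, no boxing
                p.X += p.VelX * dt;
                p.Y += p.VelY * dt;
                p.Z += p.VelZ * dt;
                p.Age += dt;
            }
        }
    }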


> but it won't be pretty at that point

Possibly still prettier than C and C++. Every time I write something I think "this could use C", and then I use C, and then I remember why I was using C# for low-level implementation in the first place.

It's not as sophisticated and good a choice as Rust, but it also offers a "simpler" experience, and in my highly biased opinion pointer-based code with struct abstractions in C# is easier to reason about and compose than the more rudimentary C way of doing it, and less error-prone and difficult to work with than C++. And building the final product takes way less time because the tooling is so much friendlier.

