
I imagine people who care about this sort of thing are happy to disable overcommit, and/or run Zig on embedded or specialized systems where it doesn't exist.




There are far more people running/writing Zig on/for systems with overcommit than not. Most of the hype around Zig comes from people not in the embedded world.

If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.

It's not a stretch to imagine that a different namespace might want different semantics, e.g. to allow a container to opt out of overcommit.

It is hard to justify the effort required to enable this unless it'll be useful for more than a tiny handful of users who can otherwise afford to run off an in-house fork.


> If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.

Except this won't happen, because "cope with allocation failure" is not something that 99.9% of programs could even hope to do.

Let's say that you're writing a program that allocates. You allocate, and check the result. It's a failure. What do you do? Well, if you have unneeded memory lying around, like a cache, you could attempt to flush it. I don't know about you, but I don't write programs that manually cache things in memory, and almost nobody else does either. The only things I have in memory are things that are strictly needed for my program's operation. I have nothing unnecessary to evict, so I can't do anything but give up.

The reason that people don't check for allocation failure isn't because they're lazy, it's because they're pragmatic and understand that there's nothing they could reasonably do other than crash in that scenario.
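
Concretely, the pragmatic approach usually boils down to a crash-on-failure wrapper like this (a minimal C sketch; the xmalloc name is just convention, not from any particular project):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The common pragmatic pattern: if malloc fails there is nothing
       sensible left to evict, so print a message and abort instead of
       letting a NULL pointer propagate. */
    static void *xmalloc(size_t n) {
        void *p = malloc(n);
        if (p == NULL) {
            fprintf(stderr, "out of memory allocating %zu bytes\n", n);
            abort();
        }
        return p;
    }

    int main(void) {
        char *buf = xmalloc(64);
        strcpy(buf, "hello");
        puts(buf);
        free(buf);
        return 0;
    }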


Have you honestly thought about how you could handle the situation better than a crash?

For example, you could finish writing data into files before exiting gracefully with an error. You could (carefully) output to stderr. You could close remote connections. You could terminate the current transaction and return an error code. Etc.
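
A rough C sketch of that kind of graceful failure path (the function name and error code are illustrative, not from any real program):

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: on allocation failure, flush what has been written so far,
       report the error on stderr, and return an error code so the caller
       can unwind the current transaction instead of crashing mid-write. */
    static int process_record(FILE *out, size_t need) {
        char *scratch = malloc(need);
        if (scratch == NULL) {
            fflush(out);   /* finish writing buffered data to the file */
            fputs("allocation failed; aborting this transaction\n", stderr);
            return -1;     /* caller terminates the transaction gracefully */
        }
        /* ... do the real work with scratch ... */
        free(scratch);
        return 0;
    }

    int main(void) {
        return process_record(stdout, (size_t)1 << 20) == 0 ? 0 : 1;
    }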

Most programs are still going to terminate eventually, but they can do so a lot more usefully than with a segfault from some instruction at a randomized address.


I used to run into allocation limits in Opera all the time. Usually what happened was a failure to allocate a big chunk of memory for rendering or image decompression, and if that happens you can give up on rendering the current tab for the moment. It was very resilient to those errors.

Even when I have a cache, it is probably in a different code path/module, and it would be a terrible architecture that let the allocation-failure path reach into that code.

A way to access an "emergency button" function is a significantly smaller sin than arbitrary crashes.

I question that. I would expect in most cases that even if you manage to free up some memory, you only have a little bit longer to run before something else uses all the memory and you are back at the original out-of-memory problem, but with nothing left to free. Not to mention those caches you just cleared presumably exist for a good reason, so your program is running slower in the meantime.

What if for my program, 99.99% of OOM crashes are preventable by simply running a GC cycle?
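
For example, something along these lines (a sketch; gc_collect() is a stand-in for whatever collector the runtime actually provides):

    #include <stdio.h>
    #include <stdlib.h>

    /* Placeholder: a real runtime would free unreachable objects here. */
    static void gc_collect(void) { }

    /* Try the allocation, run a collection cycle on failure, then retry
       once before giving up for real. */
    static void *alloc_or_collect(size_t n) {
        void *p = malloc(n);
        if (p == NULL) {
            gc_collect();      /* reclaim garbage */
            p = malloc(n);     /* retry; may still fail */
        }
        return p;
    }

    int main(void) {
        void *p = alloc_or_collect((size_t)1 << 20);
        printf("allocation %s\n", p ? "succeeded" : "failed");
        free(p);
        return 0;
    }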

> If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.

What would "cope" mean? Something like returning an error message like "can't load this image right now"? Such errors are arguably better than crashing the program entirely but still worth avoiding.

I think overcommit exists largely because of fork(). In theory a single fork() call doubles the program's memory requirement (and a parent calling it n times in a row multiplies the requirement by n+1). In practice, the OS uses copy-on-write to avoid both this requirement and the expense of copying. Most likely the child won't really touch much of its memory before exit or exec(). Overallocation allows taking advantage of this observation to avoid introducing routine allocation failures after large programs fork().
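
A rough C illustration of why overcommit makes this work (assumes a Linux-style copy-on-write fork; the 1 GiB size is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* A big heap in the parent. After fork() the pages are shared
           copy-on-write, so the kernel does not need another 1 GiB of
           backing for the child -- overcommit lets the fork() succeed. */
        size_t big = (size_t)1 << 30;   /* 1 GiB */
        char *heap = malloc(big);
        if (heap == NULL) return 1;
        memset(heap, 0x2a, big);        /* actually touch the pages */

        pid_t pid = fork();
        if (pid < 0) return 1;
        if (pid == 0) {
            /* The child execs almost immediately, so virtually none of
               the shared pages are ever really copied. */
            execlp("true", "true", (char *)NULL);
            _exit(127);
        }
        waitpid(pid, NULL, 0);
        free(heap);
        return 0;
    }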

So if you want to get rid of overallocation, I'd say far more pressing than introducing alloc failure handling paths is ensuring nothing large calls fork(). Fortunately fork() isn't really necessary anymore IMHO. The fork pool concurrency model is largely dead in favor of threading. For spawning child processes with other executables, there's posix_spawn (implemented by glibc with vfork()). So this is achievable.
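
For example, a sketch using posix_spawnp instead of fork()+exec() (standard POSIX API; the echo child is just for illustration):

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        /* Spawn a child without duplicating the (possibly huge) parent
           address space; glibc implements this with vfork()/CLONE_VM,
           so no copy-on-write commitment is needed for the parent heap. */
        char *argv[] = { "echo", "hello from posix_spawn", NULL };
        pid_t pid;
        int rc = posix_spawnp(&pid, "echo", NULL, NULL, argv, environ);
        if (rc != 0) {
            fprintf(stderr, "posix_spawnp failed: %d\n", rc);
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }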

I imagine there are other programs around that take advantage of overcommit by making huge writable anonymous memory mappings they use sparsely, but I can't name any in particular off the top of my head. Likely they could be changed to use another approach if there were a strong reason for it.
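
One sketch of that pattern: a large anonymous MAP_NORESERVE mapping that is only sparsely touched (the 64 GiB size is arbitrary):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        /* Reserve a huge anonymous mapping but only touch a few pages.
           With overcommit (or MAP_NORESERVE) the kernel hands out the
           address range without committing backing store up front;
           physical pages are allocated only for the pages written. */
        size_t span = (size_t)1 << 36;   /* 64 GiB of address space */
        char *p = mmap(NULL, span, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        p[0] = 1;                        /* first page */
        p[span - 1] = 1;                 /* last page */
        printf("mapped %zu bytes, touched 2 pages\n", span);
        munmap(p, span);
        return 0;
    }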


I never said that all Zig users care about recovering from allocation failure.

> Most of the hype around Zig comes from people not in the embedded world.

Yet another similarity with Rust.




