
One of the issues with systems programming languages is that the definitions programmers use for "well-understood" terms vary wildly in actual practice.

For example, the term "side effects" has half a dozen different meanings in common use. A Haskell programmer wouldn't consider memory allocation to be a side effect. A realtime programmer might consider taking too long to be a side effect, hence tools like RealtimeSanitizer [0]. Cryptography developers often consider input-dependent timing variance a critical side effect [1]. Embedded developers often consider things like high stack usage to be a meaningful side effect.
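To make the crypto case concrete, here's a minimal C sketch: both routines are equally "pure" in the functional sense, but only the second avoids the timing side effect a cryptographer cares about. (Illustrative only; real code should use a vetted constant-time library such as BearSSL's.)

  #include <stddef.h>
  #include <stdint.h>

  /* Leaks the position of the first mismatch through its running time. */
  int eq_leaky(const uint8_t *a, const uint8_t *b, size_t n) {
      for (size_t i = 0; i < n; i++)
          if (a[i] != b[i]) return 0;  /* early exit: input-dependent timing */
      return 1;
  }

  /* Always touches every byte; timing is independent of the contents. */
  int eq_consttime(const uint8_t *a, const uint8_t *b, size_t n) {
      uint8_t diff = 0;
      for (size_t i = 0; i < n; i++)
          diff |= a[i] ^ b[i];
      return diff == 0;
  }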

This isn't to say that a systems language needs to support all of these different definitions, just a suggestion that any systems language should be extremely clear about the use cases it's intending to enable and the definitions it uses.

[0] https://clang.llvm.org/docs/RealtimeSanitizer.html

[1] https://www.bearssl.org/constanttime.html




I've never seen anyone actually refer to time variance for either realtime or crypto as "side effects".

It's true that these are all somewhat related concepts, but I'm pretty sure the term "side effect" is consistently used in the functional sense.


The "functional sense" is the one that's underspecified for system programming. For example it considers allocation a pure operation, but that's actually implemented by modifying a global variable so how is it pure? One might argue that it's not observable, but so is printing to the console, which is usually taken as an example of an impure operation.


Even if you limit "side effects" to observable behavior in the abstract-machine sense, it's not entirely clear what it means for a function to be "pure".

GCC has two attributes for marking such functions, "pure" and "const" (not the language's const qualifier). C23 introduced the [[reproducible]] and [[unsequenced]] attributes, which are largely modeled on the GCC extensions, but with some subtle yet important differences in their descriptions.
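Roughly (illustrative GCC C; the C23 attributes correspond only loosely to these, which is exactly where the subtle differences live):

  /* const: the result depends only on the arguments; the function may
     not read global memory at all. C23's [[unsequenced]] is roughly this. */
  __attribute__((const)) int square(int x) { return x * x; }

  /* pure: may read global state, but must not modify anything
     observable. C23's [[reproducible]] is roughly this. */
  static int table[256];
  __attribute__((pure)) int lookup(int i) { return table[i & 255]; }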

Turns out it's pretty hard to define these concepts if the language is not built around immutability and pure functions from the ground up.


>One of the issues with systems programming languages is that the definitions programmers use for "well-understood" terms vary wildly in actual practice.

I think he used "side effect" in its functional-programming sense. A pure function will just take immutable data and produce other immutable data without affecting state. A function which adds 1 to a number has no side effects, while a function that adds 1 to a number and prints to the console has side effects.


> A pure function will just take immutable data and produce other immutable data without affecting state

But state is open for interpretation. If I write (making up syntax, attempting to be language-agnostic)

  fnc foo uses scalar i produces scalar
  does
    return make scalar(i + 1)
  end fnc
One could argue that is not pure, and one would have to write

  fnc foo takes heap h, uses scalar i produces heap, integer
  does
    (newHeap, result) := heap.makeScalar(i + 1)
    return (newHeap, result)
  end fnc
That expresses the notion that this function destroys a heap and returns a new heap that stores an additional scalar (an implementation would likely optimize that to modify the heap that got passed in, but, to some, that’s an implementation detail).
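For what it’s worth, the same state-threading idea can be rendered in real C by treating the heap as a value (toy types, nothing like a real allocator):

  #include <stdio.h>

  typedef struct { int cells[16]; int used; } Heap;
  typedef struct { Heap heap; int index; } AllocResult;

  /* "Consumes" one heap value and returns a new one holding i + 1.
     Pass-by-value means the caller's heap is never modified. */
  AllocResult make_scalar(Heap h, int i) {
      h.cells[h.used] = i + 1;
      h.used += 1;
      return (AllocResult){ .heap = h, .index = h.used - 1 };
  }

  int main(void) {
      Heap h0 = {0};
      AllocResult r = make_scalar(h0, 41);
      printf("%d\n", r.heap.cells[r.index]);  /* 42; h0 is untouched */
  }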

> while a function that adds 1 to a number and prints to the console has side effects.

Again, that’s open for interpretation. If the program cannot read what’s on the console, why would printing be considered a side effect? That function also heats my apartment and contributes to my electricity bill.

Basic thing is: different programmers care about different things. Embedded programmers may care about minute details such as the number of cycles a function takes.


I'd also emphasize that if a systems-level programming language is going to call itself pure, it needs a really, really careful definition of what exactly it means by pure. Purity is intrinsically relative [1]. That doesn't make it a bad goal, or a bad thing, and there's definitely a significant difference between a language striving for any meaning of "purity" and one that doesn't care at all, but whatever definition the language designer is using should be spelled out very carefully. That's particularly true for a systems language, if by "systems language" one means "the sort of language that allows poking at low-level details", because having lots of "low level" access greatly expands the scope of "things my code may be able to witness".

To give a degenerate-but-simple example of that, a low-level systems language striving for "purity" but that also allowed arbitrary memory reads for whatever reason ("deep low-level custom stack-trace functionality") would technically be able to witness the effects of the stack changing due to function calls. You could just define that away as a non-effect (and honestly would probably have to), but I'd suggest being explicit about it.
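A contrived sketch of that in C (the exact addresses are implementation-defined; the point is just that a nominally pure function can observe its own stack placement):

  #include <stdint.h>
  #include <stdio.h>

  /* Returns the address of one of its own locals, which depends on the
     current stack depth, something a "pure" function shouldn't see. */
  uintptr_t stack_probe(void) {
      int local;
      return (uintptr_t)&local;
  }

  void nested(void) { printf("nested: %p\n", (void *)stack_probe()); }

  int main(void) {
      printf("top:    %p\n", (void *)stack_probe());
      nested();  /* same "pure" call, typically a different result */
  }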

A degenerate-but-often-relevant example: a "pure" function in a language that considers memory allocation "pure" can crash the entire OS process by running it out of memory. That's so impure that not only can the execution context (thread, async context, whatever) that exhausted memory witness it, so can every other execution context in the process; indeed, whether they want to or not, they have to! We generally consider memory allocation "pure" for pragmatic reasons: we really have no choice, since the alternative is a definition of "pure" so restrictive as to be effectively useless. But that is almost the largest possible "effect" we're glossing over!
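Concretely (a sketch of the convention, not anyone's real API):

  #include <stdlib.h>

  /* "Pure" under the allocation-is-pure convention... */
  int *iota(int n) {
      int *t = malloc((size_t)n * sizeof *t);
      if (!t) abort();  /* ...until the heap runs out: the abort is
                           observed by every thread in the process */
      for (int i = 0; i < n; i++) t[i] = i;
      return t;
  }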

[1]: https://jerf.org/iri/post/2025/fp_lessons_purity/#purity-is-...



