What you describe is already the status quo today. This proposal is still a big improvement as it makes resource management less error prone when you're aware to use it and _standardizes the mechanism through the symbol_. This enables tooling to lint for the situations you're describing based on type information.
My understanding is that "ad-hoc" in this context means that you are not defining a single generic implementation but adding individual impls for specific types. There's no fundamental difference between Java's overloading and Rust's implicitly inferred trait implementations in my opinion.
Rust's implementation is more orthogonal, so specifying which impl you want explicitly does not require special syntax. It's also based on traits so you'd have to use a non-idiomatic style if you wanted to use overloading as pervasively as in Java. But are those really such big differences? See my reply to the original where I post an example using Rust nightly that is very close to overloading in other languages.
> Strictly speaking, Rust doesn't support overloaded functions. Function overloading is when you define the same function but with different arguments, and the language selects the correct one based on the argument type. In this case, it's two different implementations of a trait for two different types.
You're right that there is no function overloading in this article, just some implicit derefs.
I would argue however that Rust has overloaded functions (function resolution based on argument types) due to how it handles trait resolution with implicit inference. Rust may not have syntax sugar to easily define overloads and people generally try to avoid them, but using argument-based dispatch is extremely common. The most famous example is probably `MyType::from(...)`, but any single-method generic trait using the generics for the method arguments is equivalent to function overloading. There are also other techniques. Using nightly features you can get far enough so a consumer can use native function call syntax.
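To illustrate, here is a minimal sketch (hypothetical names, not from the article) of a single-method generic trait giving you argument-based dispatch with plain function-call syntax on stable Rust:

```rust
// One method name; the impl is selected from the argument's type,
// which is essentially function overloading.
trait Describe<T> {
    fn describe(value: T) -> String;
}

struct Printer;

impl Describe<i32> for Printer {
    fn describe(value: i32) -> String {
        format!("int: {value}")
    }
}

impl Describe<&str> for Printer {
    fn describe(value: &str) -> String {
        format!("str: {value}")
    }
}

fn main() {
    // Same call syntax, different impl chosen by inference.
    println!("{}", Printer::describe(42));
    println!("{}", Printer::describe("hello"));
}
```

This is the same shape as `MyType::from(...)`: `From<T>` is just a single-method generic trait, so each `impl From<X> for MyType` behaves like an overload of `MyType::from`.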
TypeScript compiles to JavaScript. It means both `tsc` and the TS program can share the same platform today.
With a TSC in Go, it's no longer true. Previously you only had to figure out how to run JS, now you have to figure out both how to manage a native process _and_ run the JS output.
This obviously matters less for situations where you have a clear separation between the build stage and the runtime stage. Most people complaining here seem to be talking about environments where compilation is tightly integrated with the execution of the compiled JS.
Very nice article. I agree with the theses and the resulting design makes sense.
Regarding context handling, one issue that I struggle with is where the context should be attached: by the caller or callee? In particular, in the context of intermediate libraries, I would like to limit needless work attaching the context if the caller already handles it (some sort of opt-out).
Usually I just attach the context in the callee as it's the simplest, but I'd like some systematic/reusable approach that would control this. It would be nice if it fits in the type system too. I use a bunch of languages like Rust, TypeScript, Kotlin, Python and it feels like a larger design issue.
To give a more concrete example: in Node, `fs.readFileSync(path)` includes the path in the error when it fails, so the callee attaches the context. In Rust, `std::fs::read(path)` does not attach the path to the error; the caller is responsible for the context. I would like some lightweight way to control whether a file read includes the path in its errors. The ancestor-scanning example from the article is a case where caller context is good IMO, but for unrecoverable errors callee context is usually better. Since it's contextual, having control would be nice.
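A minimal sketch of the callee-attaches-context style in Rust (names are hypothetical), wrapping `std::fs::read` so the error carries the path the way Node's API does:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// The callee attaches the path to the error, so callers get context
// for free, at the cost of paying for it even when they don't need it.
#[derive(Debug)]
struct ReadError {
    path: PathBuf,
    source: io::Error,
}

fn read_with_context(path: &Path) -> Result<Vec<u8>, ReadError> {
    fs::read(path).map_err(|source| ReadError {
        path: path.to_path_buf(),
        source,
    })
}

fn main() {
    match read_with_context(Path::new("/definitely/missing")) {
        Ok(bytes) => println!("read {} bytes", bytes.len()),
        Err(e) => println!("failed to read {}: {}", e.path.display(), e.source),
    }
}
```

The opt-out question is exactly what this doesn't solve: a caller that already knows the path still pays for the `to_path_buf` allocation on the error path.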
I've spent a lot of time thinking about error context management in C++. There seems to be a pretty deep conflict between consistent, elegant error context and efficient high-performance code. Explicit delegation works, but only sometimes avoids runtime overhead. Even finding a reasonable and robust middle ground has proven elusive.
Expanding on your caller/callee example, sometimes both are far removed from the path, e.g. only having a file descriptor. There are plenty of ways to resolve that file descriptor into a path but one that consistently has zero overhead except in the case where it is required to contextualize an error is not trivial. Being able to resolve outside context implies the management and transmission of context in cases where it is not needed.
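One way to sketch the "pay only on the error path" idea (a hypothetical helper, not a real API): take the context as a closure, so it is only evaluated when an error actually occurs:

```rust
use std::cell::Cell;

// Context is computed lazily: the closure runs only on the error path,
// so the success path pays nothing for it.
fn with_context<T, E, C>(
    result: Result<T, E>,
    f: impl FnOnce() -> C,
) -> Result<T, (C, E)> {
    result.map_err(|e| (f(), e))
}

fn main() {
    let called = Cell::new(false);

    let ok: Result<i32, &str> = Ok(1);
    let r = with_context(ok, || {
        called.set(true);
        "ctx".to_string()
    });
    assert_eq!(r, Ok(1));
    // The success path never computed the context.
    assert!(!called.get());

    let err: Result<i32, &str> = Err("boom");
    let r = with_context(err, || "while reading fd 3".to_string());
    assert_eq!(r, Err(("while reading fd 3".to_string(), "boom")));
}
```

This is the same shape as `anyhow`'s `with_context`, but it only helps when the context is cheap to *capture*; resolving an fd back to a path still needs some state to be threaded through, which is where the zero-overhead goal gets hard.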
I am also coming around to the idea that the error handling context needs to be at least partly co-designed with the logging APIs and internals to have a little more flexibility around when and where things happen. In most software neither of these is really aware of the other.
Great reply and it mirrors some of my reflections.
For Rust errors, I consider defining a single error enum for the whole crate or module to be an anti-pattern, even if it's common. The reason is exactly what you described: the variants no longer match what can actually fail in a specific function. There's work on pattern types to support subsets of enums, but until then I've found it best to have dedicated errors per pub function.
I also feel that there's an ecosystem issue in Rust where despite "error as values" being a popular saying, they are still treated as second class citizens. Often, libs don't derive Eq, Clone or Serialize on them. This makes it very tedious to reliably test error handling or write distributed programs with strong error handling.
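A minimal sketch of both points (hypothetical names): per-function error types, with `Eq` and `Clone` derived so tests can assert on exact error values instead of matching on strings:

```rust
// Each pub fn exposes only the failures it can actually produce,
// instead of one crate-wide enum with variants that never apply.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ParseIdError {
    Empty,
    NotANumber,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub enum LookupError {
    UnknownId(u32),
}

pub fn parse_id(input: &str) -> Result<u32, ParseIdError> {
    if input.is_empty() {
        return Err(ParseIdError::Empty);
    }
    input.parse().map_err(|_| ParseIdError::NotANumber)
}

pub fn lookup(id: u32) -> Result<&'static str, LookupError> {
    match id {
        1 => Ok("alice"),
        _ => Err(LookupError::UnknownId(id)),
    }
}

fn main() {
    // Callers of `lookup` never have to match parse-only variants,
    // and the derives make error assertions trivial.
    assert_eq!(parse_id(""), Err(ParseIdError::Empty));
    assert_eq!(lookup(2), Err(LookupError::UnknownId(2)));
    assert_eq!(lookup(1), Ok("alice"));
}
```

(`Serialize` is the same idea via serde derives, omitted here to keep the sketch dependency-free.)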
For Java, my experience with Exceptions is that they don't compose very well at the type level (function signatures). I guess that it's part of why they're used more for coarse handling and tend to be stringy.
What you propose is a great, relatively lightweight solution for enums compatible with `erasableSyntaxOnly`. I also see other comments discussing alternative solutions that are worth comparing.
From my side, I wanted to keep nominal typing and support for lightweight type-level variant syntax (I often use enums as discriminated union tags). Here is what I landed on:
    const Foo: unique symbol = Symbol("Foo");
    const Bar: unique symbol = Symbol("Bar");

    const MyEnum = {
      Foo,
      Bar,
    } as const;

    declare namespace MyEnum {
      type Foo = typeof MyEnum.Foo;
      type Bar = typeof MyEnum.Bar;
    }

    type MyEnum = typeof MyEnum[keyof typeof MyEnum];

    export {MyEnum};
I posted more details in the erasable syntax PR [0].
> This uses `unique symbol` for nominal typing, which requires either a `static readonly` class property or a simple `const`. Using a class prevents you from using `MyEnum` as the union of all variant values, so constants must be used. I then combine it with a type namespace to provide type-level support for `MyEnum.Foo`.
> Obviously, this approach is even more inconvenient on the implementation side, but I find it more convenient on the consumer side. The implementation-side complexity matters less if you use codegen. `Symbol` is also skipped by `JSON.stringify` for both keys and values, so if you rely on it this won't work and you'd need a branded primitive type if you care about nominal typing. I use schema-guided serialization so it's not an issue for me, but it's worth mentioning.
> The "record of symbols" approach addresses in the original post: you can annotate in the namespace, or the symbol values.
I assume they mean that you can use Rust as a language without its standard library. This matters here since the kernel does not use Rust's standard library as far as I know (only the `core` crate).
I'm not aware of semver breakage in the language.
Another important aspect is that semver is a social contract, not a mechanical guarantee. The Semver spec dedicates a lot of space to clarifying that it's about documented APIs and behaviors, not all observable behavior. Rust has a page documenting its guarantees for libraries [0].