Experiment: Making TypeScript immutable-by-default (evanhahn.com)
104 points by ingve 1 day ago | hide | past | favorite | 111 comments




> If you figure out how to do this completely, please contact me—I must know!

I think you want to use a TypeScript compiler extension / ts-patch

This is a bit difficult as it's not very well documented, but take a look at the examples in https://github.com/nonara/ts-patch

Essentially, you add a preprocessing stage to the compiler that can either enforce rules or alter the code

It could quietly transform all object-like types into read-only ones. Any mutation would then error out, with a message saying you were attempting to write to a read-only property.

You would need to decide what to do about Proxies though. Maybe you just tolerate that as an escape hatch (like eval or calling plain JS)

Could be a fun project!


One "solution" is to use Object.freeze(), although I think that just makes mutations fail silently, whereas the objective here is to surface them explicitly as type errors.

I used to have code somewhere that would recursively call Object.freeze on a given object and all its children, till it couldn't "freeze" anymore.
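A minimal sketch of that recursive freeze, assuming plain data (the name deepFreeze and the freeze-before-recurse cycle guard are my additions, not a standard API):

```typescript
function deepFreeze<T>(obj: T): Readonly<T> {
  Object.freeze(obj); // freeze first so cyclic references terminate
  for (const value of Object.values(obj as object)) {
    // Recurse into any nested object/array that isn't frozen yet;
    // primitives are skipped by the typeof check.
    if (typeof value === "object" && value !== null && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return obj;
}

const config = deepFreeze({ server: { port: 8080 }, tags: ["a"] });
console.log(Object.isFrozen(config.server)); // true
```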

I thought Object.freeze threw an exception on mutation. Digging a little more, it looks like we're both right. Per MDN, it throws if it is in "use strict" mode and silently ignores the mutation otherwise.
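That matches what I see; a quick sketch (using `new Function` to get a sloppy-mode body, since module code is strict by default):

```typescript
const frozen = Object.freeze({ n: 1 });

// Sloppy mode: the write is silently ignored.
const sloppyWrite = new Function("o", "o.n = 99; return o.n;");
console.log(sloppyWrite(frozen)); // 1

// Strict mode: the same write throws a TypeError.
const strictWrite = new Function("o", "'use strict'; o.n = 99;");
try {
  strictWrite(frozen);
} catch (e) {
  console.log((e as TypeError).name); // "TypeError"
}
```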

Isn't the idea to get a compile time error, rather than a runtime exception?

  const exploring = Object.freeze({ immutable: true })
  exploring.thing = 'new'

Property 'thing' does not exist on type 'Readonly<{ immutable: true; }>'.ts(2339)

So it would be a simple way to achieve it.


It’s interesting to watch other languages discover the benefits of immutability. Once you’ve worked in an environment where it’s the norm, it’s difficult to move back. I’d note that Clojure delivered default immutability in 2009 and it’s one of the keys to its programming model.

I don't think the benefits of immutability have gone undiscovered in JS. Immutable.js has existed for over a decade, and JavaScript itself has built-in immutability features (seal, freeze). This is an effort to make vanilla TypeScript have immutable properties by default at compile time.

JavaScript DOES NOT in fact have built-in immutability comparable to Clojure's immutable structures - seal and freeze are shallow, runtime-enforced restrictions, while Clojure's structures provide deep, structural immutability. They are based on structural sharing and are very memory- and performance-efficient.
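To make the shallow-vs-deep distinction concrete, a quick sketch - Object.freeze (and TypeScript's Readonly<T>) only cover the top level:

```typescript
const user = Object.freeze({ name: "ada", prefs: { theme: "dark" } });

// The top level is locked:
// user.name = "bob"; // TS error, and a runtime no-op/throw

// ...but nested objects are untouched - freeze is shallow:
user.prefs.theme = "light"; // compiles and succeeds at runtime
console.log(user.prefs.theme); // "light"
```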

Default immutability in Clojure is a pretty big idea. Rich Hickey spent around two years designing the language around its persistent data structures. They are not superficial runtime restrictions but an essential part of the language's data model.


I didn't say that it does have exhaustive immutability support. I said the value of it is known. They wouldn't have added the (limited) support that they did if they didn't understand this. The community wouldn't have built innumerable tools for immutability if they didn't understand the benefits. And in any case, you can't just shove a whole different model of handling objects into a thirty-year-old language that didn't see any truly structural changes until ten years ago.

> I didn't say that it does have exhaustive immutability support

seal and freeze in js are not 'immutability'. You said what you said - "JavaScript itself has built in immutability features (seal, freeze)".

I corrected you, don't feel bad about it. It's totally fine not to know some things and it's completely normal to be wrong on occasion. We are all here to learn, not to argue whose toy truck is better. Learning means going from the state of not knowing to the state of TIL.

> you can't just shove a whole different model of handling objects into a thirty year old language

ClojureScript did. Like 14-15 years ago or so. And it's not so dramatically difficult to use. Far simpler than JavaScript, in fact.


Your toy truck is being overly pedantic

I am not being pedantic - there's a critical, fundamental conceptual difference that has real implications for how people write and reason about code.

There are performance implications, a different level of guarantees, and an entirely different programming model.

When someone hears "JS has built-in immutability features", they might think, "great, why do I even need to look at Haskell, Elixir, Clojure, if I have all the FP features I need right here?". Conflating these concepts helps no one - it's like saying: "wearing a raincoat means you're waterproof". Okay, you're technically not 100% wrong, but it's so misleading that it becomes effectively wrong for anyone trying to understand the actual concept.


Sure, though Immutable.js did have persistent data structures like Clojure's.

yeah, Immutable.js is a solid engineering effort to retrofit immutability onto a mutable-first language. It works, but it's never as ergonomic as language-native immutability, and it just feels like you're swimming upstream against JS defaults. It's nowhere near Clojure's elegance. The Clojure ecosystem assumes immutability everywhere and has more mature patterns built around it.

In Clojure, it just feels natural. In JS, it feels like extra work. But for sure, if I'm not allowed to write in ClojureScript, Immutable.js is a good compromise.


I meant to point out that of course there is value in immutability beyond shared data structures.

I tried Immutable.js back in the day and hated it like any bolted-on solution.

Especially before TypeScript, what happened is that you'd accidentally assign foo.bar = 42 when you should have called foo.set('bar', 42), causing annoying bugs since it didn't update anything. You could never just use normal JS operations.

Really more trouble than it was worth.

And my issue with Clojure after using it for five years is the immense amount of work it took to understand code without static typing. I remember following code with pencil and paper to figure out wtf was happening. And doing a bunch of research to see if it was intentional that, e.g., a user map might not have a :username key/val. Like, does that represent a user in a certain state or is that a bug? Rinse and repeat.


> immense amount of work it took to understand code without static typing.

I've used it almost a decade - only felt that way briefly at the start. Idiomatic Clojure data passing is straightforward once you internalize the patterns. Data is transparent - a map is just a map - you can inspect it instantly, in place - no hidden state, no wrapping it in objects. When you need some rigidity, Spec/Malli are great.

A missing key in a map is such a rare problem for me. Honestly, I think it's a design problem - you cannot blame a dynamically typed language for it, and Clojure is dynamic for many good reasons. The language by default doesn't enforce rigor, so you must impose it yourself, and when you don't, you may get confused - but that's not a language flaw, it's the trade-off of dynamic typing.

On the other hand, when I want to express something like "this function must accept only prime numbers", I can't even do that in a statically typed language without plucking my eyebrow. Static typing solves some problems but creates others. Dynamic typing eschews compile-time guarantees but grants you enormous runtime flexibility - trade-offs.


It doesn't make sense to say that. Other languages had it from the start, and it has been a success. Immutable.js is 10% as good as built-in immutability and 90% as painful. Seal/freeze and readonly are tiny local fixes that, again, are good, but nothing like "default" immutability.

It's too late and you can't dismiss it as "been tried and didn't get traction".


That's not what I said, and that's not what my reply is about. The value of immutability is known. That's the point of this post. The author isn't a TC39 member (or at least I don't think they are). They're doing what they can with the tools they have.

You didn't understand what you were replying to. Immutability cannot be discovered later on in that sense (in practice).

one thing that's missing in JS to fully harness the benefits of immutability is some kind of equality semantics where two structurally identical objects are treated the same
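Concretely - objects compare by reference, not structure, which is what keeps keyed collections from treating identical values as equal:

```typescript
const a = { x: 1 };
const b = { x: 1 };
console.log(a === b); // false - different references

// So structurally identical keys can't be found in a Set/Map:
const seen = new Set([a]);
console.log(seen.has(a)); // true
console.log(seen.has(b)); // false - same shape, different identity
```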

They were going to do this with Records and Tuples but that got scrapped for reasons I’m not entirely clear on.

It appears a small proposal along these lines, called Composites[0], has emerged in the wake of that. It's certainly a less ambitious version.

[0]: https://github.com/tc39/proposal-composites


yes, I'm aware of composites (and of the sad fate of Records and Tuples) and I'm hopeful they will improve things. One thing that I'm not getting from the spec is the behavior of the equality semantics in case a Date (or a Temporal object) is part of the object.

In other words, what is the result of Composite.equal(Composite({a: new Date(2025, 10, 19)}), Composite({a: new Date(2025, 10, 19)}))? What is the result of Composite.equal(Composite({a: new Temporal.PlainDate(2025, 10, 19)}), Composite({a: new Temporal.PlainDate(2025, 10, 19)}))?


Records and Tuples were scrapped, but as this is JavaScript, there is a user-land implementation available here: https://github.com/seanmorris/libtuple

Userland implementations are never as performant as native implementations. That's the whole point of trying to add immutability to the standard.

even when performance might not be an issue or an objective, there are other concerns about a userland implementation: the lack of syntax is a bummer, and the lack of support in the ecosystem is the other giant one - for example, can I use this as props for a React component?

If we are pinpointing dates, ML did it in 1973 - or, if you prefer the first mature implementation, SML, in 1983.

The Purely Functional Data Structures work that Clojure's data structures are based on dates to Okasaki's 1996 thesis (the book followed in 1998).

That's how far behind the times we are.


Cool. I didn’t realize ML had such a focus on immutability as well. I have never done any serious work in ML and it’s a hole in my knowledge. I have to go back and do a project of some sort using it (and probably one in Ocaml as well). What data structures does ML use under the hood to keep things efficient? Clojure uses Bagwell’s Hashed Array-Mapped Tries (HAMT), but Bagwell only wrote the first papers on that in about 2000. Okasaki’s book came out in 1998, and much of the work around persistent data structures was done in the late 1980s and 1990s. But ML predates most of that, right?

Also, interestingly, the ClojureScript compiler in many cases emits safer JS code despite being dynamically typed. TypeScript strips all the type info from the emitted JS, while ClojureScript retains its guarantees in the compiled code.

It's redundant in a single-threaded environment. Everyone moved to mobile while pages are getting slower and slower, using more and more memory. This is not the way. Immutability has its uses, but it's not good for most web pages.

Mutability is overrated.

Immutability is also overrated. I mostly blame react for that. It has done a lot to push the idea that all state and model objects should be immutable. Immutability does have advantages in some contexts. But it's one tool. If that's your only hammer, you are missing other advantages.

The only benefit to mutability is efficiency. If you make immutability cheap, you almost never need mutability. When you do, it’s easy enough to expose mechanisms that bypass immutability. For instance in Clojure, all values are immutable by default. Sometimes, you really want more efficiency and Clojure provides its concept of “transients”[1] which allow for limited modification of structures where that’s helpful. But even then, Clojure enforces some discipline on the programmer and the expectation is that transient structures will be converted back to immutable (persistent) structures once the modifications are complete. In practice, there’s rarely a reason to use transients. I’ve written a lot of Clojure code for 15 years and only reached for it a couple of times.

[1] https://clojure.org/reference/transients


Immutability is really valuable for most application logic, especially:

- State management

- Concurrency

- Testing

- Reasoning about code flow

Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"

Also, experiencing immutability's benefits in a mutable-first language can feel like 'meh'. In immutable-first languages - Clojure, Haskell, Elixir - immutability feels like a superpower. In JavaScript, it feels like a chore.


A lot of these concepts don't mean anything to most developers I've found. A lot of the time I struggle to get the guy I work with to compile and run his code. Even something relatively simple as determinism and pure functions just isn't happening.

This is shockingly common and most developers will never ever hear of Clojure, Haskell or Elixir.

I really feel there is like two completely different developer worlds. One where these things are discussed and the one I am in where I am hoping that I don't have to make a teams call to tell a guy "please can you make sure you actually run the code before making a PR" because my superiors won't can him.


Well, yes, if your shop hires poorly, immutability won’t save you. In fact, nothing will save you.

> Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing"

I think immutability is good, and should be highly rated. Just not as highly rated as it is. I like immutable structures and use them frequently. However, I sometimes think the best solution is one that involves a mutable data structure, which is heresy in some circles. That's what I mean by over-rated.

Also, kind of unrelated, but "state management" is another term popularized by react. Almost all programming is state management. Early on, react had no good answer for making information available across a big component tree. So they came up with this idea called "state management" and said that react was not concerned with it. That's not a limitation of the framework see, it's just not part of the mission statement. That's "state management".

Almost every programming language has "state management" as part of its fundamental capabilities. And sometimes I think immutable structures are part of the best solution. Just not all the time.


I think we're talking past each other.

> I like immutable structures and use them frequently.

Are you talking about immutable structures in Clojure(script)/Haskell/Elixir, or TS/JS? Because like I said - the difference in experience can be quite drastic. Especially in the context of state management. Mutable state is the source of many different bugs and frustration. Sometimes it feels that I don't even have to think of those in Clojure(script) - it's like the entire class of problems simply is non-existent.


Of the languages you listed, I've really only used TS/JS significantly. Years ago, I made a half-hearted attempt to learn Haskell, but got stuck on vocabulary early on. I don't have much energy to try again at the moment.

Anyway, regardless of the capabilities of the language, some things work better with mutable structures. Consider a histogram function. It takes a sequence of elements, and returns tuples of (element, count). I'm not aware of an immutable algorithm that can do that in O(n) like the trivial algorithm using a key-value map.


> I made a half-hearted attempt to learn Haskell

Try Clojure(script) - everything that felt confusing in Haskell becomes crystal clear, I promise.

> Consider a histogram function.

You can absolutely do this efficiently with immutable structures in Clojure, something like

      (reduce (fn [acc x]
                (update acc x (fn [v] (inc (or v 0)))))
              {}
              coll)

This is O(n) and uses immutable maps. The key insight: immutability in Clojure doesn't mean inefficiency. Each `update` returns a new map, but:

1. Persistent data structures share structure under the hood - they don't copy everything

2. The algorithmic complexity is the same as mutable approaches

3. You get thread-safety and easier reasoning for a bonus

In JS/TS, you'd need a mutable object - JS makes mutability efficient, so immutability feels awkward.

But Clojure's immutable structures are designed for this shit - they're not slow copies, they're efficient data structures optimized for functional programming.
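For comparison, a sketch of the same O(n) histogram in TypeScript, where the idiomatic route is a mutable Map (histogram is an illustrative name):

```typescript
function histogram<T>(items: Iterable<T>): Map<T, number> {
  const counts = new Map<T, number>();
  for (const item of items) {
    // Mutate the accumulator in place - the JS-native approach
    counts.set(item, (counts.get(item) ?? 0) + 1);
  }
  return counts;
}

console.log(histogram(["a", "b", "a"])); // Map(2) { 'a' => 2, 'b' => 1 }
```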


> immutability in Clojure doesn't mean inefficiency.

You are still doing a gazillion allocations compared to:

  for (let i = 0; i < data.length; i++) { hist[data[i]] = (hist[data[i]] ?? 0) + 1; }

But apart from that, the mutable code in many cases is just much clearer compared to something like your fold above. Sometimes it's genuinely easier to assemble a data structure "as you go" instead of from the "bottom up" as in FP.

Sure, that’s faster. But do you really care? How big is your data? How many distinct things are you counting? What are their data types? All that matters. It’s easy to write a simple for-loop and say “It’s faster.” Most of the time, it doesn’t matter that much. When that’s the case, Clojure allows you to operate at a higher level with inherent thread safety. If you figure out that this particular code matters, then Clojure gives you the ability to optimize it, either with transients or by dropping down into Java interop where you have standard Java mutable arrays and other data structures at your disposal. When you use Java interop, you give up the safety of Clojure’s immutable data structures, but you can write code that is more optimized to your particular problem. I’ll be honest that I’ve never had to do that. But it’s nice to know that it’s there.

The allocation overhead rarely matters in practice - in some cases it does. For the majority of "general-purpose" tasks like web services, etc., it doesn't - GC is extremely fast; allocations are cheap on modern VMs.

The second point I don't even buy anymore - once you're used to `reduce`, it's equally (if not more) readable. Besides, in practice you typically don't use it - there are tons of helper functions in the core library for dealing with data. I'd probably use `(frequencies coll)` - I just didn't mention it so it didn't feel like I'm cheating. One function call - still O(n), idiomatic, no reduce boilerplate, intent crystal clear. Aggressively optimized under the hood and far more readable.

Let's not get into strawman olympics - I'm not selling snake oil. Clojure wasn't written in some garage by a grad student last week - it's a mature and battle-tested language endorsed by many renowned CS people, there are tons of companies using it in production. In the context of (im)mutability it clearly demonstrates incontestable, pragmatic benefits. Yes, of course, it's not a silver bullet, nothing is. There are legitimate cases where it's not a good choice, but you can argue that point pretty much about any tool.


If there was a language that didn't require pure and impure code to look different but still tracked mutability at the type level like the ST monad (so you can't call an impure function from a pure one) - so not Clojure - then that'd be perfect.

But as it stands immutability often feels like jumping through unnecessary hoops for little gain really.


> then that'd be perfect.

There's no such thing as "perfect" for everyone and for every case.

> feels like jumping through unnecessary hoops for little gain really.

I dunno what you're talking about - Apple runs their payment backend, Walmart their billing system, Cisco their cybersec stack, Netflix their social data analysis, Nubank empowers much of Latin America - they're all running Clojure, pushing massive amounts of data through it.

I suppose they just have a shitload of money and can afford to go through "unnecessary hoops". But wait, why then are tons of smaller startups running on Clojure, on Elixir? I guess they just don't know any better - stupid fucks.


The topic was immutability, not Clojure?

But ok, if mutability is always worse, why not use a pure language then? No more cowardly swap! and transient data structures or sending messages back and forth like in Erlang.

But then you get to monads (otherwise you'd end up with Elm and I'd like to see Apple's payment backend written in Elm), monad transformers, arrows and the like and coincidentally that's when many Clojure programmers start whining about "jumping through unnecessary hoops" :D

Anyway, this was just a private observation I've reached after being an FP zealot for a decade, all is good, no need to convert me, Clojure is cool :)


> Clojure is cool

Clojure is not "cool". Matter of fact, for a novice it may look distasteful, it really does. Ask anyone with prior programming experience - Python, JS, Java - to read some Clojure code for the first time and they start cringing.

What Clojure actually is - it is a "down-to-earth PL". It values substance over marketing and prioritizes developers' happiness in the long run - which comes in a spectrum; it doesn't pretend everyone wants the same thing. A junior can write useful code quickly, while someone who wants to dive into FP theory can do that too. Both are first-class citizens.


> If there was a language that didn't require pure and impure code to look different

I've occasionally wondered what life would be like if I tried writing all my pure Haskell code in the Identity monad.


Same!

Next time I feel an itch to learn a language, I'll probably pick Clojure, based mostly on this comment. Not sure when that will be though.

One doesn't need to "wear a tie" to learn Clojure - syntax is so simple it can be explained on a napkin. You need to get:

1. An editor with structural editing features - google: "paredit vim/emacs/sublime/etc.", on VSCode - simply install Calva.

2. How to connect to the REPL. Calva has the quickstart guide or something like that.

3. How to eval commands in place. Don't type them directly into the REPL console! You can, but that's not how Lispers typically work. They examine the code as they navigate/edit it - in place. It feels like playing a game - very interactive.

That's all you need to know to begin with. VSCode's Calva is great to mess around in. Even if you don't use it (I don't), it's good for beginners.

Knowing Clojure comes super handy, even when you don't write any projects in it - it's one of the best tools to dissect some data - small and large. I don't even deal with json to inspect some curl results - I pipe them through borkdude/jet, then into babashka and in the REPL I can filter, group, sort, slice, dice, salt & pepper that shit, I can even throw some visualizations on top - it looks delicious; and it takes not even a minute to get there - if I type fast enough, I slash through it in seconds!

Honestly, Clojure feels to be the only no bullshit, no highfalutin, no hidden tricks language in my experience, and jeeeesus I've been through just a bit more than a few - starting with BASIC in my youth and Pascal and C in college; then Delphi, VB, then dotnet stuff - vb.net, c#, f#, java, ruby; all sorts of altjs shit - livescript, coffeescript, icedcoffeescript, gorillascript, fay, haste, ghcjs, typescript, haskell, python, lua, all sorts of Lisps; even some weird language where every operator was in Russian; damn, I've been trying to write some code for a good while. I'm stupid or something but even in years I just failed to find a perfect language to write perfect code - all of dem feel like they got made by some motherfluggin' annoyin' bilge-suckin' vexin' barnacle-brained galoots. Even my current pick of Clojure can be sometimes annoying, but it's the least irksome one... so far. I've been eyeing Rust and Zig, and they sound nice (but every one of dem motherfuckers look nice before you start fiddling with 'em) yet ten years from now, if I'm still kicking the caret, I will be feeding some data into a clj repl, I'm tellin' ya. That shit just fucking works and makes sense to me. I don't know how making it stop making sense, it just fucking does.


I just want a way of having immutability during development and letting a compiler figure out how to optimize it into potentially mutable, efficient code in production, since it can rely on those guarantees.

No runtime cost in production is the goal


This doesn’t make much sense. One of the benefits of immutability is that once you create a data structure, it doesn’t change and you can treat it as a value (pass it around, share it between threads without cloning it, etc.). If you now allow modifications, you’re suddenly violating all those guarantees and you need to write code that defensively makes clones, so you’re right back where you started. In Clojure, you can cheat at points with transients where the programmer knows that a certain data structure is only seen by a single thread of execution, but you’re still immutable most of the time.

> No runtime cost in production is the goal

Clojure's persistent data structures are extremely fast and memory-efficient. Yes, it's technically not completely zero-overhead, but pragmatically speaking the overhead is extremely tiny. Performance usually is not the bottleneck - typically you're I/O-bound or algorithm-bound, not immutability-bound.

When it truly matters, you can always drop to mutable host-language structures - Clojure is a "hosted" language, it sits atop your language stack (JVM/JS/Dart), so it all depends on the runtime. When in javaland, JVM optimizations feel like blackmagicfuckery - there's JIT, escape analysis (it proves objects don't escape and stack-allocates them), dead code elimination, etc. For like 95% of use cases, perf in an immutable-first language (in this example, Clojure) is almost never a problem.

Haskell is even faster because it's pure by default, so the compiler optimizes aggressively.

Elixir is a bit of a different story - it might be slower than Clojure for CPU-bound work, but only because BEAM focuses on consistent (not peak) performance.

Pragmatically, for tasks that are CPU-bound where the requirement is "absolute zero-cost immutability", Rust is a great choice today. However, the trade-off is that the development cycle is dramatically slower in Rust compared to Clojure. The REPL-driven nature of Clojure allows you to prototype and build very fast.

From many different utilitarian points of view, Clojure is an enormously practical language. I highly recommend getting some familiarity with it, even if it feels very niche today. I think it was Stu Halloway who said something like: "when Python was the same age as Clojure, it was also a niche language."


> Also, experiencing immutability benefits in a mutable-first language can feel like 'meh'.

I felt that way in the latest versions of Scheme, even. It’s bolted on. In contrast, in Clojure, it’s extremely fundamental and baked in from the start.


exactly - React could not deal with mutable objects, so they decided to make immutability seem like something that, if you hadn't used it before, meant you didn't understand programming.

programming with immutability has been best practice in JS/TS for almost a decade

however, enforcing it is somewhat difficult, and there's still quite a bit lacking when working with plain objects or Maps/Sets.


We shouldn't forget that there are trade-offs, however. And it depends on the language's runtime in question.

As we all know, TypeScript is a superset of JavaScript, so at the end of the day your code is running in V8, JavaScriptCore or SpiderMonkey - depending on what browser the end user is using - as an interpreted language. It is also a loosely typed language with zero concept of immutability at the native runtime level.

And immutability in JavaScript, without native support that we could hopefully see in some hypothetical future version of EcmaScript, has the potential to impact runtime performance.

I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.

One expensive operation in JavaScript is cloning objects, which includes arrays in JavaScript. If you do that a lot.. if, say, you're using something like Redux or ngrx where immutability is a design goal and so you're cloning your application's runtime state object with each and every single state change, you are extremely de-optimized for performance depending on how much state you are holding onto.
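One caveat worth adding: idiomatic immutable updates don't have to clone the whole state tree - only the changed path is cloned, and untouched branches stay shared by reference. A sketch with illustrative types:

```typescript
type State = { user: { name: string }; items: number[] };

function rename(s: State, name: string): State {
  return {
    ...s,                      // shallow clone of the root
    user: { ...s.user, name }, // clone only the changed branch
    // s.items is carried over by reference, not copied
  };
}

const s0: State = { user: { name: "a" }, items: [1, 2, 3] };
const s1 = rename(s0, "b");
console.log(s1.items === s0.items); // true - shared, not cloned
console.log(s1.user.name, s0.user.name); // "b" "a"
```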

And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days where your servers can own your state and your clients just be "dumb" presentation and views. Businesses want full "offline mode." The relationship is shifting to one where your backends are becoming leaner .. in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want to, but because the business goal necessitates it.

Then consider the spread operator, and how much you might see it in TypeScript code:

  const foo = {
    ...bar, // clones bar, so the cost of this simple expression is pegged to how large the object is
    newPropertyValue,
  };

  // same thing: clones the original array in order to push a single item,
  // because "immutability is good, because I was told it is"
  const foo = [...array, newItem];

And then consider all of the "immutable" Array functions like .reduce(), .map(), .filter()

They're nice, syntactically ... I love them from a code maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for-loop and will make an O(N) operation into an O(N^3) because they're chaining these together with no consideration for the performance impact.
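To be precise, chained passes alone stay linear (just with intermediate allocations); the quadratic blow-ups usually come from nesting a linear lookup inside one. A sketch with illustrative names:

```typescript
type User = { id: number; name: string };
type Order = { userId: number };

const users: User[] = [{ id: 1, name: "ada" }, { id: 2, name: "bob" }];
const orders: Order[] = [{ userId: 2 }, { userId: 1 }];

// O(N*M): a linear .find runs once per order
const slow = orders.map(o => ({ ...o, user: users.find(u => u.id === o.userId) }));

// O(N+M): index users once, then each lookup is constant time
const byId = new Map(users.map(u => [u.id, u] as const));
const fast = orders.map(o => ({ ...o, user: byId.get(o.userId) }));

console.log(fast[0]?.user?.name); // "bob"
```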

And of course you can write performant code or non-performant code in any language. And I am the first to preach that you should write clean, easy to maintain code and then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability and the way to write immutable JavaScript will put you in a position where performance is going to be worse overall because the tools you are forced to reach for, as matter of course, are themselves inherently de-optimized.


Yes, if your immutability is implemented via simple cloning of everything, it’s going to be slow. You need immutable, persistent data structures such as those in Clojure.

Sounds easier to just use some other compile-to-JS language; it's not like there are no other options out there.

I'm still mad about Reason/ReScript for fumbling the bag here.

ReScript/ReasonML is still in development, and a more seasoned dev team can easily pick it as a better alternative to TypeScript.

It's a bummer Haxe did not promote itself more for the web, as it's an amazingly good piece of tech. The language shows its age, but it has an awesome type system and metaprogramming capabilities.

That said, haxe 5 is on the horizon.


While TS allows easy integration with JS, this doesn't work well with other languages that compile to JS.

You lose all type benefits of libraries that are written in TS.


It's quite rare to see interop between compile-to-JS languages, though. Also rare to see projects using more than one compile-to-JS language (unless they're in the middle of a rewrite/port). YMMV.

Agreed. Gleam is a great one that targets JavaScript and outputs easy to read code

Yup. Also ReScript, if you're not a fan of the Elm architecture.

Not if you want to use typescript.

TypeScript is the obvious choice if all you know/want to learn is JS. But the language is still garbage because of "valid JS is valid TS".

And yes, I know that is what made it popular.


This is some criticism that lacks any depth or insight.

I've deployed projects in Elm, Scala, Clojure, Purescript and TypeScript has many great qualities that the others don't have.

It's an incredibly powerful language with a great type system which requires some effort to understand (e.g. 99% of candidates don't even know what a mapped type is, and it's written in the docs...) and minimal discipline to avoid the JS pitfalls.
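
For reference, a mapped type transforms each property of an existing type. A hedged sketch of a recursive readonly mapping (similar in spirit to the `DeepReadonly` helpers found in libraries like ts-essentials; all names here are illustrative):

```typescript
// Map every property of T to readonly, recursing into nested objects.
type DeepReadonly<T> = {
  readonly [K in keyof T]: T[K] extends object ? DeepReadonly<T[K]> : T[K];
};

// Runtime no-op; the guarantee lives entirely in the type system.
function freeze<T>(obj: T): DeepReadonly<T> {
  return obj as DeepReadonly<T>;
}

const cfg = freeze({ server: { port: 8080 }, name: "app" });
console.log(cfg.server.port); // 8080
// cfg.server.port = 9090;    // compile error: read-only property
```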

On top of that you have access to tons of tools and libraries, which alternative ecosystems either don't have (e.g. no compile-to-js language) or have to interoperate with at js level (anything from Reason to Gleam) anyway.

Beyond that, there's other important considerations in choosing a language beyond its syntax/semantics and ecosystem, such as hiring or even AI-friendliness.

Stricter TS is absolutely a valuable effort to chase.


The issue with TS is that it's way too easy to fall back to unsafe code. Also, the TS type system is WAY, WAY too complex. They keep piling on hard-to-grasp niche features, which has made the language really hard to learn.

The TS sweet spot was (IMHO) somewhere around the 1.8-2.0 era. These days you can run Doom in the type system.

I can't speak to hiring, as I don't hire a dev who knows language X; I hire engineers who know the ins and outs of how software should be written, and who know when to pick Go, when to pick OCaml, and when to go with C/Rust.

Also, I would never use 99.9% of npm packages (JS or TS), so I don't really care that much.

As an example, writing type definitions for ReasonML is not really that hard, and as a benefit you know exactly which parts you use.

Also, I don't use AI, and we don't accept any PRs that are vibe-coded.


> 99% of candidates don't even know what a mapped type is, it's written in the docs

Please don't ask shit like that during interviews. For the love of god.


It is also how C++ and Objective-C got users from C land.

The languages on the JVM and CLR got through by targeting the same bytecode, and Swift, even if imposed from above, also had to make interop with Objective-C first class, and is now in the process of doing the same for C++.

Turns out adoption is really hard, if a full rewrite is asked for, unless someone gets to pay for those rewrites, or gets to earn some claim to fame, like in the RIG and RIR stuff.


Rust compiles to wasm right?

ScalaJs!

I am a fan of immutability. I was toying around with javascript making copies of arguments (even when they are complex arrays or objects). But, strangely, when I made a comment about it, it just got voted down.

https://news.ycombinator.com/item?id=45771794

I made a little function to do deep copies but am still experimenting with it.

  function deepCopy(value) {
    // Prefer structuredClone: handles Dates, Maps, Sets, cycles, etc.
    if (typeof structuredClone === 'function') {
      try { return structuredClone(value); } catch (_) {}
    }
    // JSON round-trip fallback: drops functions, undefined, and prototypes
    try {
      return JSON.parse(JSON.stringify(value));
    } catch (_) {
      // Last fallback: return the original reference (no copy at all)
      return value;
    }
  }

Aside: Why do we use the terms "mutable" and "immutable" to describe those concepts? I feel they are needlessly hard to say and too easily confused when reading and writing.

I say "read-write" or "writable" and "writability" for "mutable" and "mutability", and "read-only" and "read-only-ness" for "immutable" and "immutability". Typically, I make exceptions only when the language has multiple similar immutability-like concepts for which the precise terms are the only real option to avoid confusion.


Read-only does not convey (to me) the fact that something cannot change, just that I cannot change it. For example, you could make a read-only facade to a mutable object; that would not make it immutable.

Same reason doors say PUSH and PULL instead of PUSH and YANK. We enjoy watching people faceplant into doors... er... it's not a sufficiently real problem to compel people to start doing something differently.

"read-only-ness" is much more of a mouthful than "immutable"!

Generally immutability is also a programming style that comes with language constructs and efficient data structures.

Whereas 'read-only' (to me) is just a way of describing a variable or object.


This has really, irrationally, interested me now. I'm sure there is something there with the internal setters in TS, but damn, I need to test now. My thinking is to override the setter to evaluate whether it's mutable or not, the obvious approach.

Yeah there's a lot you could do with property setter overrides in conditional types, but the tricky magic trick is somehow getting Typescript to do it by default. I've got a feeling that `object` and `{}` are just too low-level in Typescript's type system today to do those sorts of things. The `Object` in lib.d.ts is mostly for adding new prototype methods, not as much changing underlying property behavior.

This is tangential but one thing that bothers me about C# is that you can declare a `readonly struct` but not a `readonly class`. You can also declare an `in` param to specify a passed-in `struct` can’t be mutated but again there’s nothing for `class`.

It may be beside the point. In my experience, the best developers in corporate environments care about things like this but for the masses it’s mutable code and global state all the way down. Delivering features quickly with poor practices is often easier to reward than late but robust projects.


`readonly class` exists in C# today and is called (just) `record`.

`in` already implies the reference cannot be mutated, which is the bit that actually passes to the function. (Also the only reason you would need `in` and not just a normal function parameter for a class.) If you want to assert the function is given only a `record` there's no type constraint for that today, but you'd mostly only need such a type constraint if you are doing Reflection and Reflection would already tell you there are no public setters on any `record` you pass it.


I'm not sure if it's what you mean, but can't you have all your properties without a setter, and only init them inside the constructor for example ?

Would your 'readonly' annotation dictate that at compile time ?

eg

    class Test {
        private readonly string _testString;

        public Test(string testString)
            => _testString = testString;
    }

We may be going off topic though. As I understand it, objects in TypeScript/JS are explicitly mutable, as the interpreter expects them to be. But I will try and play with it.


I think you would want to use an init only property for your example

    class Test {
        public string Test { get; init; }
    }

I'm not a C# expert though, and there seems to be many ways to do the same thing.

I don't use the init decorator myself but I would hazard a guess it's similar. Don't quote me on that though.

The point does stand though, outside of modifying properties I'm not sure what a "private" class itself achieves.


> I don't use the init decorator myself but I would hazard a guess it's similar.

Genuinely curious, why not? It seems to be less verbose. I don't write C#, so I'm not sure of the downsides of any particular feature.


I love this idea so so much. I have maybe 100k lines of code that's almost all immutable, which is mostly run on the honor system. Because if you use `readonly` or `ReadOnlyDeep` or whatnot, they tend to proliferate like a virus through your codebase (unless I'm doing it wrong...)

Definitely need purely functional data structures then. Is there a rich ecosystem for that for TypeScript?

fp-ts is the strictest fp implementation in typescript land.

https://gcanti.github.io/fp-ts/modules/

But the most popular functional ecosystem is effect-ts, though it does its best to _hide_ the functional part, in the same spirit as ZIO.

https://effect.website/


> That should make arr[1] possible but arr[1] = 9 impossible.

I believe you want `=`, `push`, etc. to return a new object rather than just disallow it. Then you can make it efficient by using functional data structures.

https://www.cs.cmu.edu/~rwh/students/okasaki.pdf


At the TypeScript level, I think simply disallowing them makes much more sense. You can already replace .push with .concat, .sort with .toSorted, etc. to get the non-mutating behavior, so why complicate things?

You might want that, I might too. But it’s outside the constraints set by the post/author. They want to establish immutable semantics with unmodified TypeScript, which doesn’t have any effect on the semantics of assignment or built in prototypes.

Well said. (I too want that.) I found my first reaction to `MutableArray` was "why not make it a persistent array‽"

Then took a moment to tame my disappointment and realized that the author only wants immutability checking by the typescript compiler (delineated mutation) not to change the design of their programs. A fine choice in itself.


How do immutable variables work with something like a for loop?

Is TFA (or anyone else for that matter) actually concerned with "immutable variables"?

e.g., `let i = 0; i++;`

They seem to be only worried about modifying objects, not reassignment of variables.


That's probably because reassignment is already covered by using `const`.

Of course, it doesn't help that the immutable modifier for Swift is `let`. But also, in Swift, if you assign a list via `let`, the list is also immutable.
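
To illustrate the distinction with a minimal sketch: in TS/JS, `const` only prevents rebinding the variable, while Swift's `let` also makes the value itself immutable.

```typescript
const xs = [1, 2, 3];
xs.push(4);      // allowed: `const` does not freeze the array object
// xs = [];      // compile error: cannot assign to a constant binding

console.log(xs); // [1, 2, 3, 4]
```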



Erlang doesn't allow variable reassignment. Elixir apparently does, but I've never played with it.

typescript handles that well already

Unless you need the index, you can write: for (const x of iterable) { ... } or for (const attribute in keyValueMap) { ... }. However, loops often change state, so it's probably not the way to go if you can't change any variable.

If you need the index, you can use .keys() or .entries() on the iterable, e.g.

    for (const [index, value] of ["a", "b", "c", "d", "e"].entries()) {
      console.log(index, value);
    }
Or forEach, or map. Basically, use a higher level language. The traditional for loop tells an interpreter "how" to do things, but unless you need the low level performance, it's better to tell it "what", that is, use more functional programming constructs. This is also the way to go for immutable variables, generally speaking.

There's no difference between for (x of a) stmt; and a.forEach(x => stmt), except for scope, and lack of flow control in forEach. There's no reason to prefer .forEach(). I don't see how it is "more functional."

You use something else like map/filter/reduce or recursion.

`for` loops are a superfluous language feature if your collections have `map` for transformations and `forEach` for producing side effects
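
A small sketch of that substitution, using hypothetical data:

```typescript
const prices = [10, 20, 25];

// Transformation: map instead of an index loop building a new array.
const doubled = prices.map((p) => p * 2);

// Aggregation: reduce instead of a loop mutating an accumulator.
const total = prices.reduce((sum, p) => sum + p, 0);

console.log(doubled); // [20, 40, 50]
console.log(total);   // 55
```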

Since sibling comments have pointed out the various ES5 methods and ES6 for-of loops, I'll note two things:

1. This isn't an effort to make all variables `const`. It's an effort to make all objects immutable. You can still reassign any variable, just not mutate objects on the heap (by default)

2. Recursion still works ;)


They don't work. The language has to provide list and map operations to compensate.

For immutability to be effective you'd also need persistent data structures (structural sharing). Otherwise you'll quickly grind to a halt.

Why would you quickly grind to a halt?

[flagged]


@dang likely a bot

Making web pages even slower. Normal people complain about it all the time, arguing that modern programmers are lazy.


