> and (c) requires a language feature (sequence points) to disambiguate in which order side effects get executed. With Haskell, we just don’t have to care.
Reading up to this point, I had to chuckle a bit. I have struggled with the Haskell type system more than I care to admit; it's almost never "we just don't have to care" compared to most other popular languages.
That being said, this article does a nice job of gently introducing some of the basics that will trip up somebody who is casually looking through code wondering what *> and <*> and <* do. As usual, there is a steep learning curve because of stuff like this all over the codebase. If I walk away for a month, I need to revisit >>= vs >> and other common operators before I can be productive. It probably doesn't help that I never actually speak to a human about these concepts so in my head it's always ">>=" and not "bind."
I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine. The exception is when parsing with one of the parsecs where you get a lot of <* and *> usage and all the tutorials use those symbols.
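For what it's worth, those applicative operators behave the same way outside a parser. A toy sketch in the Maybe applicative (illustrative only, no parsec required):

```haskell
-- <*> applies a wrapped function; *> keeps the right result; <* keeps the left
main :: IO ()
main = do
  print (Just (+1) <*> Just 2)  -- Just 3
  print (Just 1 *> Just 2)      -- Just 2
  print (Just 1 <* Just 2)      -- Just 1
  -- a Nothing anywhere fails the whole expression, which is why parsers
  -- use *> / <* to sequence while discarding delimiters they still require
  print (Nothing *> Just 2 :: Maybe Int)  -- Nothing
```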
But I like <|>; it feels very clear that it has a sort of "or" meaning.
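That "or" reading can be sketched with the Maybe instance of Alternative (findPort and its lookup tables are made up for illustration):

```haskell
import Control.Applicative ((<|>))

-- try the first lookup; if it yields Nothing, fall back to the second
findPort :: String -> Maybe Int
findPort k = lookup k [("http", 80)] <|> lookup k [("https", 443)]

main :: IO ()
main = do
  print (findPort "http")    -- Just 80
  print (findPort "https")   -- Just 443 (first lookup failed, fell back)
  print (findPort "gopher")  -- Nothing  (both failed)
```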
Elixir doesn’t even need a special syntax — it gets Haskell’s LambdaCase as a natural consequence of case being just another function, and the pipe operator always chaining by the first argument.
Haskell’s >>= is doing something slightly different. ‘getThing >>= \case…’ means roughly ‘perform the action getThing, then match the result and perform the matched branch’
Whereas ‘getThing |> \case…’ means ‘pattern match on the action getThing, and return the matched action (without performing it)’.
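A runnable sketch of the Haskell side of that contrast, with getThing as a hypothetical stand-in action:

```haskell
{-# LANGUAGE LambdaCase #-}

-- a stand-in for whatever getThing actually does
getThing :: IO (Maybe String)
getThing = pure (Just "thing")

main :: IO ()
main =
  -- >>= performs getThing first, then hands its result to the \case,
  -- which selects and performs the matching branch
  getThing >>= \case
    Just t  -> putStrLn ("got " ++ t)
    Nothing -> putStrLn "got nothing"
```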
The >>= operator can also be used for anything satisfying the Monad laws, e.g. your actions can be non-deterministic.
Pedantry: >>= can be used for any instance of the Monad typeclass. GHC isn't sufficiently advanced to verify that a given Monad instance satisfies the monad laws; it can verify that the instance satisfies the type definition, but for e.g. the right-identity law "m >>= return === m" the instance can still return something violating the law (something not substitutable with m).
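To make that concrete, here is a type-correct Monad instance that GHC happily accepts even though it breaks right identity (Counted is purely illustrative, not from any library):

```haskell
-- every bind bumps a counter, so (m >>= return) /= m
newtype Counted a = Counted (Int, a) deriving (Show, Eq)

instance Functor Counted where
  fmap f (Counted (n, a)) = Counted (n, f a)

instance Applicative Counted where
  pure a = Counted (0, a)
  Counted (n, f) <*> Counted (m, a) = Counted (n + m, f a)

instance Monad Counted where
  Counted (n, a) >>= f =
    let Counted (m, b) = f a
    in Counted (n + m + 1, b)  -- the "+ 1" violates the law; GHC accepts it

main :: IO ()
main = do
  let m = Counted (0, 'x')
  print (m >>= return)  -- Counted (1,'x'), not equal to m
```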
Being able to pipe into def, defp, defmodule, if, etc. is fantastic! But apparently, at least in the case of case, you're saying it's not a macro, just a function which returns a monad; I guess it's the same effect after all? Is that why people say Haskell can get away with lacking Lisp-style macros, because of its type system and laziness?
I was wrong about this. Case is a macro (a special-cased macro even, defined at https://github.com/elixir-lang/elixir/blob/4def31f8abea5fba4...), not a function. It works with pipes because in the AST it's a call, unlike in Haskell where case is its own thing.
> I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine
Interesting. Probably it's just me, but I first learned monads using >>= (in OCaml), so at the beginning I found the Haskell do notation more confusing (and indentation rules didn't help). >>= is just a function and I understand its signature well. On the other hand, "do" is syntactic sugar that I sometimes had to mentally desugar.
> It's almost never "we just don't have to care" when comparing to most other popular languages.
Struggling with the Haskell type system is not the experience of somebody who has developed an intuition about it. Granted, it is not a binary thing; you can have good intuition about some parts of it and struggle with others.
I think the way you put it is, while technically true, not fair. Those "most other" languages are very similar to one another. It is not a C# achievement that you don't struggle with its type system coming from Java.
This is like people struggling with Rust because of the borrow checker; well, they have probably never programmed with a borrow checker before.
Good intuition is a gift from god. For the rest of us, notational confusion abounds. Types are great. Syntax (not semantics) and notation can be a major bummer.
I think use makes mastery: the more you use it, the more mastery you will have, and better intuition will follow.
I struggled with the borrow checker because I’m smarter than it is, not because I haven’t worked with one before. Mainly I say “I’m smarter” because I’ve worked on big projects without it and never had any issues. Granted, I’ve only gotten in a fight with it once before giving up on the language, mainly because it forced me to refactor half the codebase to get it to shut up and I had forgotten why I was doing it in the first place.
Let's be realistic, Rust is great, especially compared to C.
But it shifts the need to memorize undefined behavior to the need to memorize the borrow checker rules.
If you are dealing with common system-level needs like doubly linked lists, Rust adds back the need for that superhuman memory of undefined behavior, because the borrow checker is limited to what static analysis can do.
IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that help mitigate those problems.
Probably just my opinion, and I am not suggesting it is superior, but Zig-style lengths as part of the type would mitigate most of what is problematic with C/C++.
Basically, a char myArray[10]; really being just *myArray is the main problem.
Obviously the borrow checker removes that problem, but not once you need deques, treaps, etc.
If I could use Rust as my only or primary language, memorizing the borrow checker rules wouldn't be that bad.
But it becomes problematic for people who need to be polyglots, in a way that even Haskell doesn't.
I really think there's ways for Rust to grow into a larger role.
But at this point it seems that even mentioning the limits is forbidden, and people end up trying to invert interface contracts and leak implementation details when their existing systems are incompatible with the project's dogma.
It is great that they suggest limiting the size of unsafe code blocks, etc., but the entire world cannot bend to decisions that ignore the real-world nuances of real systems.
Rust needs to grow into a language that can adjust to very real needs, and many real needs will never fit into what static analysis can do.
Heck C would be safer if that was the real world.
I really do hope that the project grows into the role, but the number of 'unsafe' blocks suggests it isn't there today, despite the spin.
> But it shifts the need to memorize undefined behavior to the need to memorize the borrow checker rules.
With one massive difference: the borrow checker will tell you when you're wrong at compile time. Undefined behaviour will make your program compile and it will look like your program is fine until it's not.
> But it shifts the need to memorize undefined behavior to the need to memorize the borrow checker rules.
You have a choice: Fight the borrow checker on the front end until your code compiles, or chase down bugs on the back end from undefined behavior and other safety problems. No single answer there, it depends on what you're doing.
Cognitive load is a big issue. In Rust I find this comes in a couple of phases, the first being figuring out what the borrow checker does and how to appease it. The second is figuring out how to structure your data to avoid overusing clone() and unsafe blocks. It's a bit like learning a functional language, where there's a short-term adaptation to the language, then a longer-term adaptation to how you think about solving problems.
You mention doubly-linked lists, which are a good example of a "non-star" data structure that doesn't nicely fit Rust's model. But: for how many problems is this the ideal data structure, really? They're horrible for memory locality. Those of us who learned from K&R cut our teeth on linked lists but for the last 20 years almost every problem I've seen has a better option.
> IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that help mitigate those problems.
rust-analyzer is the tool that does this. Having it flag errors in my editor as I write code keeps me from getting too far down the wrong road. The moment I do something Rust thinks is goofy, I get a red underline and I can pause to consider my options at that moment.
In my particular case, I was absolutely sure it was safe without having to test it. Would it remain so forever? Probably not because software changes over time.
In those cases, what you are supposed to do is write your logic using unsafe and wrap it with lifetime-safe APIs. That way your "smartness" joins efforts with Rust's, and not-so-smart people can use your APIs without having to be as smart.
Funny, but I remember the difference between `>>=` and `>>` even though I haven't written Haskell in a couple of years.
`>>=` passes to the right the value coming from the left, while `>>` drops it.
To give an example, you use `>>=` after `readFile` to do something with the contents.
You use `>>` after `putStrLn` since `putStrLn` doesn't return a meaningful value.
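A self-contained sketch of both patterns (it writes the file it then reads, so no particular path is assumed):

```haskell
main :: IO ()
main = do
  writeFile "notes.txt" "file contents"
  -- readFile's String result is needed, so pass it along with >>=
  readFile "notes.txt" >>= putStrLn
  -- putStrLn's () result is useless, so >> just sequences
  putStrLn "first" >> putStrLn "second"
```

The same contrast works in any monad, e.g. `Just 3 >>= \n -> Just (n + 1)` passes the 3 along, while `Just 3 >> Just 9` drops it.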