If you still haven't given TypeScript a go as a Javascripter, now is a great time to do so.
Whether you end up adopting it or not, it's interesting to get the types out of your mind and into the code. The first time you feel the speed/confidence of refactoring with accurate 'Find usages', you'll decide if the undeniable overhead of types is worth it.
> The first time you feel the speed/confidence of refactoring with accurate 'Find usages', you'll decide if the undeniable overhead of types is worth it.
For your information: in VSCode, "Find all references" and "Rename symbol" work out of the box, even if your code isn't typed.
Edit and disclaimer: Not sure why I'm getting downvoted; my comment doesn't contradict the parent message. I personally type my code too (with Flow). It's an amazing tool for static code analysis and avoiding errors at runtime. I would suggest everyone give it a try (with TypeScript, Flow or whatever).
If the code isn't typed, then you can't find all references accurately. For example, in the below, you've no idea if the "a" inside the function is the same as the one on "foo".
```javascript
var foo = {
    a: true, // Find all references on "a" here...
    b: "hello"
};

foo.a = false;
bar(foo);

function bar(obj) {
    obj.a = false; // Won't find this "a".
}
```
Call site inference can follow this sometimes. However with types it can be certain, e.g.
```typescript
interface Foo {
    a: boolean;
    b: string;
}

var foo: Foo = {
    a: true, // Find all references on "a" here...
    b: "hello"
};

foo.a = false;
bar(foo);

function bar(obj: Foo) {
    obj.a = false; // Will be found, renamed if refactored, etc.
}
```
Note that JavaScript support in VS Code is powered by the same engine as TypeScript, so it uses the same inference; it just can't infer untyped parameters.
There is some support for JSDoc in the engine, so code like the below will work:
```javascript
/** @typedef {{a: boolean, b: string}} Foo */

/** @type {Foo} */
var foo = {
    b: "hello"
};

foo.a = false; // Find all refs here
bar(foo);

/**
 * @param {Foo} obj - Some object
 */
function bar(obj) {
    obj.a = false; // will find this one
}
```
Thank you for actually providing an argument for having types. The parser could, however, inline the function and replace obj with foo. I do advocate using the same variable names for function arguments, though: having different names for the same variable is not only hard to refactor, it's also very confusing! So instead of using "that", me, myself, obj, str, etc., use the actual variable name!
Either I do not understand your point or you are walking a thin line.
If a particular function can be tied to a particular variable from the enclosing scope, then the function is not all that useful on its own. On the other hand, if a function is general but has the same argument name as the variable from the enclosing scope that is going to be passed in, it is mentally difficult to analyse the function without thinking about that variable, which may hide bugs in the function. E.g. a square root function may omit a negative-argument check because the variable from the enclosing scope is always positive.
Instead of relying on knowledge of scoping rules, I advocate distinct variable names, so that scoping rules (name shadowing) do not even come into play.
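To illustrate the shadowing hazard being described, here is a minimal sketch (the names are hypothetical):

```typescript
// Outer variable: in this module it happens to always be non-negative.
const input = 16;

function sqrt(input: number): number {
    // The parameter shadows the outer `input`. Reusing the name makes it
    // tempting to skip the negative-argument check, which only the outer
    // variable is guaranteed to pass.
    if (input < 0) throw new Error("negative argument");
    return Math.sqrt(input);
}

console.log(sqrt(input)); // 4
```

With a distinct parameter name there is nothing to confuse the inner value with, so the check is harder to forget.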
I think the problem has to do with object oriented programming and classes.
Like when you have an Apple that inherits from Fruit, which later starts looking like a Banana. With prototypes you do not have that problem, as you copy-paste code instead of coupling code.
When you rename a variable and have to search more than one function or file, there are bigger problems with your design, because you are basically using a global variable, even though it's praised as a module. Now, does your refactoring tool also fix documentation and other people's code that depends on yours?
```javascript
function Person(name) {
    var person = this;
    person.name = name;
}

var person = new Person("Jon Doe");
alert("Welcome " + person.name);
person.name = "Harry Houdini";

// ES6 example:
var messages = people.map(person => person.name += " is a rock star");
```
I don't know, but generally call-site inference has a couple of notable issues:
- If you don't have any calls to the function yet (i.e. "bar(foo)" wasn't there above), then there is nothing to infer from, so you have no idea what the type is at this point (other than by the use of the parameter inside the function).
- Is your call-site wrong? If the parameter type is explicit, this is easily checked. If not, then again you can only go by what the function does with it. (Which may not be much help if it's something like "RestAPI.post(JSON.stringify(theParamInQuestion))" ).
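A minimal sketch of that second point, with hypothetical names: an explicit parameter type turns a wrong call site into a compile error instead of something the tooling has to guess around.

```typescript
interface Payload { id: number; body: string }

function post(payload: Payload): string {
    // With an explicit type, the compiler knows what `payload` is
    // regardless of whether any call sites exist yet.
    return JSON.stringify(payload);
}

// A correct call site:
const sent = post({ id: 1, body: "hello" });

// A wrong call site is rejected at compile time rather than inferred around:
// post("hello"); // error: string is not assignable to Payload
```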
> For your information: in VSCode, the "Find all references" and "Rename symbol" work out-of-the-box, even if your code isn't typed.
I didn't downvote, but this seems incorrect to me. I'm sure that it works in small code bases, but once your code gets large enough (and dynamic enough) it's going to have to start missing things.
Why would their algorithm stop working if your code gets large? It's not some kind of magic. :)
My current project has 10,000 LOC and 130 files. If I use "Rename symbol" on something imported in different files of my project, the editor will open all the files, edit all the names and just wait for me to save the changes.
Doesn't work at all for me. My project has about 500k+ LOC.
I suspect though it isn't the size of project, but certain patterns of design that work or don't work in non typed JS.
However, Brackets does better at finding method definitions. Even in my mess of a project with untyped JS, Brackets can always find the method definition, or at least give me possible choices.
> VSCode, the "Find all references" and "Rename symbol" work out-of-the-box, even if your code isn't typed.
With good coding practice, references, "senders" and "implementers," and renaming worked well in Smalltalk. Where I missed types was during refactoring, and that also worked well about 99% of the time without types. It was that one last thing that we couldn't be 100% sure of that blocked the big refactoring we all wanted to do badly -- that's when I missed types!
VSCode has become a mind-blowingly good editor. It's closing in on the power of heavy-weight IDEs with much better performance than some code editors (e.g. Atom), even.
Both VSCode and Atom are based on Electron, which is slow and battery-intensive as it is based on NodeJS. Are you saying that recent VSCode versions are as performant as, say, a C++ based editor or even (shudder) a Java based editor such as Jetbrains (Webstorm, etc.)?
I last looked at VSCode (on a KDE 4 CentOS 7 desktop) about half a year ago, and it was terribly slow, even without extensions.
I would say VS Code is far more performant than the Jetbrains editors in my experience on a 2013 MBP. It may be that the Jetbrains editors are "doing more", but I always found them frustratingly sluggish - maybe a worthwhile trade off when using something like Scala but I personally couldn't deal with it for Javascript.
I'd really recommend trying it again. On a performant machine, VS Code feels like a native app to me aside from a slightly slower start up time, whereas Atom is noticeably slow. On my bottom spec 12" Macbook the difference is more noticeable, but everything feels slow on there.
I have to second Mr.timruffles. After using TS in a couple of projects I grew to miss it when I couldn't use it.
If you don't like the C#-ness of Typescript, try using Facebook's flow. Typing isn't a magic bullet, typescript isn't even type safe, but it sure makes development easier and your apps more stable.
Going one level deeper, the underlying truism is that computers make better bean counters than people, even programmers. Type systems are good insofar as they free up cognitive resources of the programmer. Conversely, type systems are bad when they make programmers burn attention resources without enough return. (This is a difficult thing for people to evaluate, because sometimes the "return" from a type system can pay off many years later.)
Again, it all comes down to cost-benefit. A new, modern 21st century language should be designed with the above in mind. The type system should make the resulting apps more stable, stay out of the way of coders reading code and writing new code, yet enable powerful tooling that doesn't need a heavyweight background process running a O(n^4) algorithm to collate meta-level information to enable all the IDE features. (Ideally, the tooling should be able to parse everything it needs to parse at some small multiple of the time it takes to read the source files off the drive.)
Again, it's all about cost-benefit! "Elegance" or popularity with tech hipsters be damned, what's the cost, and what does it get you? (The nickel-and-dime costs of waiting around for lugubrious and unresponsive tools not only add up, they operate with some huge multiplier effect.)
In my opinion, the critical aspect of typed code is the ability to declare a variable as either a number, string or binary. Let me declare this up front, let the compiler enforce it, and my mind is freed from semi-paranoid concerns about edge cases and spurious inputs.
But sometimes I don't care exactly what sort of number is used. This is where many type systems go overboard, by forcing me pick a 64 bit unsigned integer or whatever.
The thing is, you don't have to use the C#-ness of typescript. typescript started like that and I disliked it at first, but it's now very good at typing an entire codebase using mostly a functional programming style.
flow is unfortunately way behind in terms of robustness and IDE support.
Where do you perceive the C#-ness of TS coming from? TS and Flow are ~90% identical (most programs accepted by one would be accepted by the other) and TS adds only a few purely optional syntactic features (namespace, enum) that are widely found in other languages.
* Disclaimer: Correct me if I'm wrong, I haven't used flow extensively but I did check the docs.
One big thing is that TypeScript classes are entirely different from standard JS classes. I guess you don't have to use them, but TypeScript adds abstract classes, interfaces and access control (e.g. private, protected vars).
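A quick sketch of those additions (hypothetical types). Worth noting that `private`/`protected` are compile-time checks only; they are erased from the emitted JS.

```typescript
abstract class Shape {
    abstract area(): number; // subclasses must implement this
}

interface Named {
    name: string;
}

class Circle extends Shape implements Named {
    name = "circle";
    // A parameter property: declares and assigns a private field in one step.
    constructor(private radius: number) { super(); }
    area(): number { return Math.PI * this.radius * this.radius; }
}

const c = new Circle(1);
// new Shape();  // compile error: cannot create an instance of an abstract class
// c.radius;     // compile error: 'radius' is private
console.log(c.area()); // ≈ 3.14159
```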
I've used both. They're different monsters. Elm is a functional-first language and a reactive GUI framework all in one. Typescript is just a superset of JS.
Elm is pretty neat if you want to build a reactive GUI framework with everything immutable by default and have lots of functional programming idioms at your disposal. However that's all it does: if you want to do e.g. server-side rendering or use it in an angular project you're out of luck; if you want to interop with other JS code then it's a pain.
Typescript doesn't enforce any paradigm or bring any particular framework with it. It's just Javascript with type annotations and some extra features. So you can use it with React and Redux to create something like Elm's framework, but it won't be quite as safe or concise as you'd get with Elm. However it fits in much nicer with other JS libraries and can run on the server or alongside any JS GUI framework of your choice.
Elm is also still officially beta, and new releases often involve fairly large breaking changes.
Personally I started off a big fan of Elm but fell off the bandwagon as I ran into its limitations.
Other people's comments explain the details well, but I wanted to say that I have a large (real-world) project where the bulk of the app and all the business logic is in Elm, and all the more fiddly UI bits are in Typescript. The two languages communicate with each other over typesafe Elm ports and it all works fairly wonderfully.
what boundaries do you think Elm is pushing as a language?
It is a cool language and platform, but to me it looks like with time it has actually become more conservative.
TS on the other hand appears to contain some interesting new things on each release (possibly too many).
Case in point, Lookup and Mapped types in 2.1 seem like a brilliant idea.
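For context, a small sketch of what those 2.1 features look like (hypothetical types):

```typescript
interface Point { x: number; y: number }

// keyof: the union of property names, here "x" | "y"
type PointKey = keyof Point;

// Lookup type: the type of a single property, here number
type XType = Point["x"];

// Mapped type: derive a variant of an existing shape; this is
// essentially how the built-in Partial<T> is defined.
type PartialPoint = { [K in keyof Point]?: Point[K] };

// Typical use: typesafe property access without stringly-typed keys.
function getProp<T, K extends keyof T>(obj: T, key: K): T[K] {
    return obj[key];
}

const p: Point = { x: 1, y: 2 };
const x = getProp(p, "x"); // inferred as number
// getProp(p, "z");        // compile error: "z" is not a key of Point
```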
--------------
Elm
--------------
[Pros]
- Good language design (immutable, based on Haskell+OCaml, nice type unions/pattern matching, everything is an expression, etc.)
Most people who write Elm are seduced by its elegance.
- Very, very typesafe. If it compiles, there's a 99.5% chance you won't get runtime errors (barring a few bugs). Handy when you're tired.
- It was made for beginners, it's easy to pick up, the tooling is simple, the compiler errors are easy to read.
- The "Elm architecture" provides a way to build simple SPAs, out of the box, without having to use 10 libraries like some people do in the JS world.
[Cons]
- The community is tiny
- Elm is... by far the most opinionated environment I've ever seen. More so than RoR/Ember.
Elm the language and Elm the framework have no clean escape hatches.
It provides guidance for newbies, but when you encounter problems, it stings.
- The Elm framework is entangled with the Elm language. One person pretty much maintains both; developments go at a very slow pace and potential contributors don't feel overly welcomed.
- Elm's type system is extremely simplistic. You won't be able to express many things with it, and your code will have a LOT of boilerplate.
- If you're a front-end developer with a sharp attention to detail, it's a pain to build complex things with it, especially SPAs that have a lot of state. You need hacks all the time, as the language and the framework are fairly underpowered, and new features land once in a blue moon. There are many things that are impossible to do in pure Elm, so you have to write JavaScript anyway. The official way is to use ports, but many teams seem to rely on the frowned-upon Native modules, which are written in awkward JS.
- Talking to JavaScript (foreign interface) is a bit of a pain (less so now than in the past, though!) and so is reading/writing JSON, compared to TypeScript.
--------------
Typescript
--------------
[Pros]
- Extremely productive, you're never stuck with a problem
- There is an entire team of professional developers behind this project. Betas and RCs are actually more robust than Flow releases, for instance. You can even depend on nightly builds... It's solid.
- Very typesafe (way more than Java for instance), if you know what you're doing.
- The type system is now very, very expressive (version 2.0 and 2.1 helped a lot)
- It's extremely easy to use JS libs in your typescript project
- Performance is good, as the generated JS is pretty much just the TS without the type annotations. There is no runtime, shim, etc.
- You still have access to everything JS, so for instance you can use CSS modules with webpack just like if you were using JS. The community is enormous.
[Cons]
- It's still crappy old JavaScript underneath. Everything is mutable (worse, the Array APIs mix mutable and immutable styles), if/else are not expressions, and the "this" keyword, prototypes and classes are complete madness.
- Not typesafe enough if you are careless. Lots of traps. Unlike Elm, you can actually shoot yourself in the foot by not knowing about certain compiler flags, using <any> too often (I use a no-any tslint rule) or generally not paying attention enough (you need to give hints to the compiler in the right places)
It's always going to be less typesafe than Elm, though. For instance, functions are bivariant, ewww.
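A sketch of that bivariance point, with hypothetical types: method parameters are compared bivariantly, so the following compiles even though it is unsound. (The later `--strictFunctionTypes` flag tightens function-typed properties, but method parameters stay bivariant.)

```typescript
class Animal { name = "animal" }
class Dog extends Animal { bark(): string { return "woof" } }

interface Handler {
    handle(a: Animal): string;
}

// A handler that only works for the subtype...
const dogHandler = {
    handle(d: Dog): string { return d.bark(); }
};

// ...is accepted where a supertype handler is expected (parameter bivariance):
const h: Handler = dogHandler;

console.log(h.handle(new Dog())); // "woof"
// h.handle(new Animal()); // compiles, but would crash at runtime: no bark()
```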
So for me, Elm is interesting conceptually and has the potential to become good in a few years (perhaps compiling to WebAssembly), but TypeScript is the pragmatic choice in the short/medium term, unless you're building something fairly simple, in which case you can use absolutely whatever you want anyway!
I wholeheartedly agree with your cons for Typescript. If you're willing to move to a language that compiles to Javascript, you might as well go to something like Scalajs.
After trying TS, I basically never want to write JS again.
I know that 'OO' and 'typing' are not the solution to everything ... but aside from all the nice things you can do in TS ... the 'enforced architecture' of OO-ish paradigms, combined with typing and the essential obfuscation of the prototype paradigm ... has cut development time in half.
I can hardly think of a reason to use JS now that TS exists.
Of course - there are some reasons, in some specific situations, but by and large, TS is the future.
I've stopped using those OO-ish paradigms in JS (no 'this', 'new', 'prototype'), but even still, have much preferred using TS. You can get a long way (and have great flexibility) with only interfaces and generic functions.
Same, it's so much saner. Classes and prototypes bring nothing but trouble to the table, except when you want tons of objects in a memory-efficient way in library code.
TS does a great job without OO thanks to structural typing.
A class is a pretty bad abstraction to try to fit the world into. It might work for some things, but it usually results in a lot of ceremony, and a lot of concepts in your program that exist only to service the abstraction. It's useful in JS for some special cases, but being forced into it is just going to make your code verbose and complicated for no reason. I'll stick to Flow for now.
For the record, you're not forced into it at all. I'll mention that in the TypeScript compiler, we use very few classes at all - it's mostly written in a function-oriented style.
TypeScript tries not to impose any sort of restrictions on coding style.
"A class is a pretty bad abstraction to try to fit the world into"
I totally disagree with this. They are basically one of the best forms, if not 'the best' form of abstraction we have, particularly when it comes closer to the level of 'real world' abstraction.
Classes are the basis of 'typing' - which is to say, typing beyond primitive types.
They provide us with the ability to 'define things' that are not simply strings, numbers or booleans.
It's why JS has moved onto classes, and almost all popular languages use them.
The only time I find they can be limiting is when there is a need to deal with looser typing, i.e. quick literal objects ... but even then, 9 times out of 10 when I want to 'get around classes', I'm just being lazy.
It's interesting to me that all of the initial reactions I've seen to this announcement have been around the introduction of async and object spread, which are available with babel, but the typescript specific features such as mapped types are completely ignored.
I don't really have any particular meaning behind that observation, only that it tickled my funny bone a little bit.
TypeScript PM here - the reason for that is that when we implement a feature in TypeScript, we take lengths to ensure that it is typed appropriately and that its performance characteristics are reasonable.
That means that when using object rest/spread, we didn't want to just ship an experience where the type is effectively `any`, leading users to be frustrated if they make an error.
With async/await, we had to rewrite our emit pipeline, which meant that we needed to keep parity in both output as well as time to emit. The investment has shown, and TypeScript is still extremely fast.
We're always willing to think about different approaches, but those are some of the rationales here.
Did you mean to reply to my comment? If so, to help clarify, I wasn't critiquing the long delay of those features; I merely was amused that they've gotten so much more attention than the far more novel features that were also released.
async/await in C# is pretty nasty, and usually overused and overhyped for nothing but downside. It screws up debugging, it screws up call stacks, it pushes you to write bad code, and it's just plain complicated. It infects your code up and down the stack, and every new method MS releases now seems to be .SomethingAsync().
I likened it today to having a garden hose in a Victorian sewer, but then declaring the sewer is the bottleneck: async/await to the rescue!
Most people use it in totally inappropriate scenarios, for no good reason but adding a load of complexity to your code and debugging.
But your sewer is now even bigger! Look at how you can throw that hose around!
I was working on a code base where the old developers had started going async/await for 'performance reasons'. Cue me ripping it all out, actually fixing the performance problems, and downsizing the client from a P1 to an S1 in Azure with 3x the load and vastly better page load speeds.
It's almost always a premature optimisation. My opinion is don't ever use it unless you actually know what the specific bottleneck is that async/await will fix and you actually need it.
I think it is unfair to compare async/await on top of poorly performing code with async/await generally. Fast synchronous code being faster than slow asynchronous code is kind of tautological.
async/await keeps your UI thread unblocked. Or, more generally, it keeps your threads unblocked. I have an ETL process that benefits from async/await greatly: I can stream more data in/out of the database with fewer threads.
That's not unique to the task-based asynchronous pattern, but it's far fewer lines of code to write than BeginExecuteReader/EndExecuteReader etc. so my appetite for doing it is much higher. And I believe asynchronous streaming (à la DbDataReader.ReadAsync) has no non-TAP equivalent.
Two steps forward, one step back. (You're right about debugging.)
Why is it tautological? There is no performance gain for most people using async/await, at all. That's simply not how it works. You have to be in a specific scenario, high load on the server's resources, not the DB, to see any benefit.
And while keeping the UI thread unlocked is great, how many people are using this in C# web code instead?
Where the entire web pipeline is already set up to naturally multi-thread by itself.
I'm curious what experience led to this opinion because I can't disagree more.
Most any .NET site doing volume should be using TPL, because the framework is incredibly efficient at managing threads and preventing the pipeline from getting clogged. I've worked on dozens of APIs and sites that need to deal with hundreds and thousands of concurrent requests. Handling those in a synchronous fashion or hand-rolling state management is horrible and I'd never want to go back to it.
The only downside I agree with is the debugging, but that gets better with every C#/.NET/VS release, and you really shouldn't have a ton of complexity buried in your await block. If it hurts, there's a good chance you're violating SOLID.
> There is no performance gain for most people using async/await, at all. That's simply not how it works. You have to be in a specific scenario, high load on the server's resources, not the DB, to see any benefit.
If you have two asynchronous things to do, and they are unrelated, start them both and then await Task.WhenAll(file1Download, file2Download). Look at the wall clock - it's up to twice as fast. Look ma, no threads or synchronization primitives!
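The same start-both-then-await pattern in TypeScript terms, with hypothetical download stubs; the total wait is roughly the longest task rather than the sum:

```typescript
// Hypothetical stand-ins for two unrelated async downloads.
function download(name: string, ms: number): Promise<string> {
    return new Promise(resolve => setTimeout(() => resolve(name + " done"), ms));
}

async function fetchBoth(): Promise<string[]> {
    // Start both immediately...
    const d1 = download("file1", 50);
    const d2 = download("file2", 50);
    // ...then await them together; elapsed time is ~50ms, not ~100ms.
    return Promise.all([d1, d2]);
}

fetchBoth().then(results => console.log(results)); // ["file1 done", "file2 done"]
```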
> And while keeping the UI thread unlocked is great, how many people are using this in C# web code instead? Where the entire web pipeline is already set up to naturally multi-thread by itself.
Blocking operations ... block. Now your thread is doing nothing while your database churns/disk writes/network packet streams. As I understand it, using async/await with the SynchronizationContext in ASP.NET will yield the thread to another request. You get more requests per thread, which mitigates the problems with thread-per-request as described here [1]. Instead it's a combination of thread-based and event-driven models.
Not a lot of things need to access two files at once. You and I both know virtually all await/asyncs are doing a single thing, like DB access or emailing or a single file access.
So why bring it up?
And as for your condescending "Blocking operations ... block", talk about missing my entire point.
IIS is the Victorian sewer; your code is the hose. IIS has many threads available to it, that's my entire point. You don't need async/await to free up those threads; it's got a ton of them ready to go.
Most programmers are not going to hit the thread limit, and when they do they can upgrade hardware while they figure out the tiny few high-impact async/awaits that would actually mitigate the problem.
It's pointless pre-optimization. We all talk about how evil it is, so why is it suddenly not evil with async/await?
I make plenty use of async requests to get files (images) and make web service calls at the same time. It's a Xamarin mobile application. Mobile apps are sometimes constrained by latency, so having a few requests in flight at once speeds things up.
Performance is not just fewer CPU cycles.
> IIS has many threads available to it, that's my entire point
My web services run on a VM with four cores. That's what I get given, and there's another twenty websites on it. The less overhead my code has the better. Just because it's not 100K TPS doesn't mean there aren't benefits.
I agree mostly. One of the biggest issues with async/await in C# is that if not used carefully you can get deadlocks. If you get even one such deadlock, the cost of debugging it can wipe out all your hardware savings. For that reason most people should not use async/await in C#.
Since there is no deadlock issue in JavaScript async/await should result in code that is easier to write as well as maintain.
Well, to me at least the mapped types are the main advantage of this release. Very frequently you start building a record incrementally (so, you can't assign it to an interface where all fields are mandatory), but then at a given stage you will validate that record and copy it to another variable which has the type with all the fields mandatory. Previously you had to define two copies, one with mandatory fields, another with all optional... kind of a bummer, and easy to update one and to forget the other!
Also, this greatly improves the type security of variables which work as keys to objects, without repetition of those keys on the interface and on the string literal.
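The incremental-build case above can be sketched with the mapped-type-based `Partial<T>` built-in from 2.1, so the optional copy is derived rather than maintained by hand (hypothetical record):

```typescript
interface Config { host: string; port: number }

// While building the record incrementally, fields may be missing:
const draft: Partial<Config> = {};
draft.host = "localhost";
draft.port = 8080;

// One validation step narrows the draft to the mandatory-field type,
// so there is no second hand-written interface to keep in sync.
function validate(c: Partial<Config>): Config {
    if (c.host === undefined || c.port === undefined) {
        throw new Error("incomplete config");
    }
    return { host: c.host, port: c.port };
}

const config: Config = validate(draft);
console.log(config.port); // 8080
```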
For me, I'm excited because this announcement means I can finally remove Babel from my compile process and just rely on TypeScript to do all of the magic for me.
"the typescript specific features such as mapped types are completely ignored"
Oh, for me it's just because I'm too dumb to grasp them ;)
Joking aside, I knew the other features from JavaScript already. I've used them for half a year now and know what they give me. The rest is something I'll probably appreciate when the time comes and I need it.
Not surprising at all; parity with Babel's language support and the ability to switch to TypeScript easily are very useful. Some features, like spread, couldn't be typed properly without the new type-system features, btw.
From my experience, missing these features was a bit of an irritation. But now, with what TypeScript already offers plus the small missing stuff like spread, it's even better.
Maybe it is me, but I feel like mapped types and keyof are terrible features, and using them would be a sign of a bad design/architecture.
In general, I don't think using a string that represents a static symbol (such as the name of a var, an attribute or a class) is a good idea.
I try to keep a simple stack (Typescript + NPM at the moment), and I prefer to wait for Typescript to have the feature I want, than to install a new dependency (Babel, for example) to my project.
I think there are two main kinds of developers who use TypeScript. Some developers come from traditional statically-typed languages, like C#/Java/C++/etc, and expect the language to conform to their idea of "good design". For these developers, "mapped types" are "not good design". Other developers come from JavaScript, Python, Ruby, or other duck-typed languages and think, "I know that this object is just a dictionary underneath it all, and I want a type system that lets me write code with that foundation." These developers want additional safety but they see traditional type systems as handcuffs.
These are two very different styles of programming, and TypeScript does a fairly good job of serving both groups--it would have to, in order to be as successful as it is. There are a ton of useful JavaScript libraries out there which couldn't easily be rewritten to conform to some Java-style type system, and there are a ton of skilled Java/C#/C++ engineers out there who don't want their name on a bunch of cowboy code.
I don't think this cultural division is going to go away any time soon, so it's in our best interest to believe that people in the "other camp" (whichever camp that is) are skilled and conscientious developers. I'm sure you've heard the arguments by dynamic language lovers who talk about how restrictive/slow/painful it is to write in a static type system, and I'm sure you're as tired of that argument as I am.
> I'm sure you've heard the arguments by dynamic language lovers who talk about how restrictive/slow/painful it is to write in a static type system, and I'm sure you're as tired of that argument as I am.
At this point in time, it's a bit of a false dichotomy. It just amounts to a programming tool based on a form of meta-data. Of course people are going to have different ideas about the cost-benefit of that tooling. Of course, people are going to have differing opinions on how much of that tooling is worthwhile. Most of the problems with online discussions about this kind of tooling, come from people forgetting that it's just a discussion about tooling.
I don't think this is a false dichotomy. There's definitely a spectrum between static and dynamic type systems, and plenty of people who identify with either camp. Calling it "tooling" changes the name of the problem, there are still people who will complain when they think the tooling is going in the wrong direction, and those complaints have merits because so many of us will be forced to write code in a style we don't like.
> There's definitely a spectrum between static and dynamic type systems
The key word is spectrum.
> and plenty of people who identify with either camp.
What does it mean when people take something that's actually a spectrum, then divide that into two opposing camps? Does this sort of activity generally get public discourse closer to the truth, or farther away from it? It usually does the latter, in my experience. Hopefully, someone has set that knob to a position that works pretty well, and things work out.
> those complaints have merits because so many of us will be forced to write code in a style we don't like.
Just who gets to write code in exactly the style that they like? From what I've seen, people generally make compromises, or they found the "perfect" job, or they are the one running the project, or they are going rogue and doing their own thing and introducing inconsistencies into a project's codebase.
Honestly, I feel like I am on the receiving end of some moralizing here—when someone tells me that my language "brings public discourse farther away from the truth" I wonder how I could have offended them so deeply.
The difference between a dichotomy and a spectrum is itself a false dichotomy. Ask any biologist what a species is, and they might stammer out some kind of weaselly definition full of hedges, and that same biologist might turn around and publish a dichotomous key which tells you how to identify a particular species according to an easy set of rules. The same goes for politics, human sexuality, and yes, type systems.
So I'm not going to hedge myself when I say that people "identify with either camp". They do. Neither is it a contradiction when I say that there's a spectrum. And when I say that there are two main kinds of people—I hope that my reader understands that I'm not a robot, and that I don't actually think that type systems are some kind of strict dichotomy.
> Just who gets to write code in exactly the style that they like?
I could point out that, again, like/dislike is a spectrum and not only a dichotomy. See above discussion.
> Honestly, I feel like I am on the receiving end of some moralizing here—when someone tells me that my language "brings public discourse farther away from the truth" I wonder how I could have offended them so deeply.
Please re-read. I never said your programming language "brings public discourse farther away from the truth." Otherwise, please provide a quote. What I said is that some discussion about language "brings public discourse farther away from the truth."
I just re-read, and perhaps you are talking about human language. Also, no, I'm not offended. I'm just making an observation about how talking about things in a certain way can shape thought in non-beneficial ways.
> Ask any biologist what a species is, and they might stammer out some kind of weaselly definition full of hedges, and that same biologist might turn around and publish a dichotomous key which tells you how to identify a particular species according to an easy set of rules. The same goes for politics, human sexuality, and yes, type systems.
When I encounter political discussions online, I sometimes have a person pattern-match something I've said, then declare all of my political beliefs for me. In those situations, false dichotomy -- or rather unawareness of the spectrum -- has distorted someone's thinking away from the truth.
I suspect this happens to Golang a lot. Basically, the Go language designers seem to be heavily into the Pareto principle: they are very willing to ship a quite barebones level of tooling and leave a lot of features out, especially from the type system. They seem (to me) to have gotten most of the benefits of a type system while minimizing its disadvantages, and to have kept much of the "feel" of a dynamic language while avoiding most of its downsides. It's a kind of pragmatic minimalism I've only encountered before in dynamic languages like Clojure, Lua, and Smalltalk.
I assume you mean lookup types rather than mapped types; the latter has nothing to do with strings.
In general, in application-facing code I think you're absolutely correct: using strings as lookups is a hack that should be discouraged. (Though sometimes it's convenient when dealing with other JS libraries).
However the feature brings some strong metaprogramming abilities that wouldn't be possible otherwise. For instance I'd have to imagine that mapped types were implemented by using those features, and the ability to define types as variants of other types (beyond ordinary generics) is something very powerful.
I'm primarily an F# user and have been begging for something like this there. The ability to define variations on record types (without an id/timestamp for inserts, with those fields otherwise; hashed/raw password fields for your various user records; etc.) in a typesafe manner without a ton of repetition is a game changer.
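The kind of record-variant derivation described here can be sketched with the `Pick` mapped type that shipped in 2.1. `User` and `NewUser` are made-up names for illustration:

```typescript
interface User {
  id: number;
  createdAt: string;
  name: string;
  email: string;
}

// Pick (a mapped type shipped in TS 2.1) derives the "insert" variant,
// i.e. the same record minus the server-generated fields, without
// repeating any property types.
type NewUser = Pick<User, "name" | "email">;

const draft: NewUser = { name: "Ada", email: "ada@example.com" };
```

If `User` gains or changes a property later, `NewUser` stays in sync automatically.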
I am basically 'anti religious' when it comes to software.
So many people argue back and forth about this or that, and 80% of arguments are academic and effete (this is what I mean by 'religious').
But I'm a big supporter of TS because I believe 'it makes sense' on almost every level.
I'm not a 'big supporter' of many things at all, if any.
I understand the limitations, and that it enforces some things for which we may want more flexibility ... but overall, I have to say I can't think of any reason ever to use JS again.
My hope is that V8 etc. build engines to run TS directly as opposed to having to transpile.
Is it possible to have a setup with TypeScript where it is guaranteed that no code changes occur other than removal of the type information?
I started using Flow, found what it can and can't do and would like to try TypeScript. But only if I can have "types-only", I don't want my code "translated" in any way. I'm writing for the latest node.js version and not for x different browsers, I want to use exactly what that version supports and have no code-changing steps.
With Flow I use flow-remove-types (https://github.com/leebyron/flow-remove-types) to remove the types. It leaves spaces where there was type-related code and doesn't touch the code itself.
You actually wrap the type annotations in comments and flow will recognize them. You can use the source with the type comments in your browser!
As an example, the flow checker will read the annotations from `function f(n /*: number */, s /*: string */) { ... }` but since they are wrapped in comments, your JS engine ignores them.
This is amazing for typing existing code, since at every step you are merely inserting comments into your code. And your existing minification step will strip them out :)
Then just install `flow-bin` and create an empty `.flowconfig` file in the root of your repo and you're done. Now you have Babel compiling your code the same as always and you can use the Flow type syntax, and with ESLint you probably already have your editor all setup to show you warnings.
We're trying to integrate more with tools like this so you don't have to go changing your entire workflow. It shouldn't be so hard to add types to your existing JavaScript code.
For eslint I'm using plugin "flowtype", and under "extends" I have "plugin:flowtype/recommended". Do you happen to know what's different compared to the eslint-flow package you mention? I took the first one I could find, didn't want to spend hours with the eslint part of using Flow since I was busy with all the errors reported by Flow during the conversion process.
EDIT:
Ah I think I get it, it's for editors that don't interface with Flow but only with eslint. Well, since WebStorm has Flow support - even though it's new and there are a few tickets open - I guess I should stick with the plugin that I already chose.
I figured if I start using Flow I may as well go all the way. My previous setup was to have JSDoc and Closure compiler style (inline) type annotations everywhere. The IDE - WebStorm - picked up on that. I filed tons of tickets with Jetbrains and got more and more issues resolved over time, but with the latest Webstorm there were some regressions and it seemed like they had difficulties with all of this. For example, I kept getting suggestions for properties picked up in clearly impossible places, autocompletion for the JSDoc types stopped working lately, etc.
So I had enough and started with Flow on one file, then two - then I spent a week converting everything to Flow. I actually found a few subtle bugs in my code, but also two or three minor ones in Flow (issues submitted). I also found clear limits of this type system: for example, when you know 100% that a value cannot be undefined, but Flow insists you add a type check before using the property, because it does not follow your dynamic logic, only its types.
TypeScript and Flow have the same behavior with regard to removing types, and TypeScript and Babel have the same behavior with regard to downleveling ES6+ code to ES5.
If you target ES6 with TypeScript and only write ES6-compliant code (plus types), you won't get any downleveling of features, so your code will always be the same as you wrote it (other than the removal of types).
If you use TS-specific features like 'enum' or 'namespace', there will necessarily be some rewriting of your code to turn it into usable JS, but those features are entirely optional.
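Concretely, the setup described above amounts to a couple of compiler options. A minimal tsconfig.json sketch (the flag names are the standard ones; the module setting is just one common choice):

```json
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs"
  }
}
```

With `"target": "es6"`, ES6-compliant source passes through with only the type annotations stripped.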
TypeScript seems to have been very religious about following "Bracha's law" (types shouldn't effect program semantics), and I'm unaware of any translations they do beyond supporting some new features on previous platforms (e.g. classes on pre-es6).
It is kind of annoying sometimes actually, in that typescript can't support niceties like operator overloading or extension methods because that would require de-sugaring.
> Is it possible to have a setup with TypeScript where it is guaranteed that no code changes occur other than removal of the type information?
Yes and no.
Typescript does a little bit of filling in the gaps of some missing features if the target platform does not support a feature you used. So your compiled code will still be ES6 (or ES5), but it may have some boilerplate added to polyfill for something if ES6 or ES5 (or others..) do not officially support it.
Looking at the discussion on GitHub for the issue that led to this commit, I'm a little amazed at how much difficulty some people seem to have had with the simple concept. "Just pass it through without code changes, only remove type information." Why does that lead to such convoluted discussions? Oh well, it's been done.
I assume it's because "no code changes" implies you could potentially be creating something that doesn't run where you want it to. Modern JS development means you need to be aware of the ECMAScript version you're writing in, and the version you're targeting. Once you're aware of it, transpiling with TypeScript means you don't necessarily need to target `esnext`: if you write ES5 and target ES5, it'll just remove the types as expected.
The downside of using `esnext` everywhere is that someone might be employing a feature that is not widely supported in the target browser/JS engine. So it's pretty bad to start with that as a default without an understanding of the consequences.
> "no code changes" implies you could potentially be creating something that doesn't run where you want it to.
Eh... I don't see how those arguments are possibly serious? Is this forced hand-holding of adult people truly your attitude? I deliberately select a non-default option. You (not directly you probably, but you chose to represent that POV) don't know anything about me or my situation. But immediately you start worrying about stuff you don't know anything about, that I might not know what I'm doing, might... could... possibly... hypothetically... Sorry, but this reply, or the attitude you presented which I know may not be your own, really pisses me off, to say it quite frankly. Don't even try to defend the forced(!) nanny attitude, even if you just tried to present what you think is other people's argument.
"You" in my comment means a hypothetical person who could be doing the changes. It wasn't directed at you personally, since it's obvious I know nothing about your context or environment.
I'll be more careful about writing "one" instead of "you" next time.
I recently started using TS instead of JS and I've been loving it. I find errors much earlier and while the tooling could be a bit better, I honestly find it less exhausting than keeping up with Babel.
Will always compile, even if propertyName is invalid. The only advantage over Object.assign I see is IDE support (at least in Intellij IDEA). I will still use:
TypeScript is great. I built https://www.findlectures.com over a year, starting in plain JavaScript. Once the codebase was large enough that I got stuck, I added TypeScript, and it's been great for isolating defects.
It's nice paired with React (vs PropTypes) because the checking happens a lot earlier and is much richer.
How does the development cycle work? With plain JS I load up my html page in browser (chrome) and head to the console to check for errors in the JS. Then I do user testing.
Can TypeScript be debugged by a browser at the source-code level, i.e. not at the transpiled level? If not, I'm not sure it's worth it. And I say that as someone who is a huge fan of explicit optional typing.
There is typically a quick transpile process in the IDE/toolchain of your choice. Most browser dev tools now support sourcemaps quite well which means that debugging shows the TS source and breakpoints are set directly there and typically show error stacktraces with your TS files.
Another option is to load Typescript in your browser and let it transpile directly in the browser. Sourcemaps in this case are a bit more iffy in their browser support, but it can be an option for a quick debug cycle if you wanted to try something like that. Not particularly the fastest idea for a production app, though.
The development cycle is basically this: write TypeScript code in a .ts file, transpile to a .js file, load the .js into your HTML. Transpiling can be handled by the IDE or by a build tool like gulp. For example, JetBrains WebStorm has a built-in file watcher that tracks changes to .ts files and transpiles accordingly.
TypeScript can be debugged at the source-code level using source maps. WebStorm and VS Code have built-in or plugin solutions. I've heard some people complain about issues, but I had a pretty smooth experience so far.
In theory source maps are the answer, but personally this has only worked some of the time. When it does work, though, it is magical and exactly answers your question of debugging in the browser at the source level.
Very early on, Typescript cleverly built up an ecosystem around community-supported type definitions for popular js libraries. This makes type-checking and integration for those libraries dead simple.
2 years later, Flow is _still_ lagging behind in this area. [1] I'm not certain, but I think this may be due to TypeScript allowing for an external header-like file while Flow requires inline types.
For this reason I believe, as many other commenters have noted, Typescript has nearly always been ahead of Flow in popularity.
You're right, this has been added recently and there's also now a community effort around 3rd party modules. [1]
Though it looks like it didn't become active until after at least 03/2016 [2].
I mention this only because TypeScript has been active in this area since 10/2012 [3], which is in-line with the original intent of my comment - that Flow is over 2 years (closer to 3.5 years) behind in building this out.
It's true and unfortunate. TypeScript is the new CoffeeScript. It splits the ecosystem. Flow is a progressive enhancement and improves the ecosystem. TypeScript has had more push in mindshare marketing from Microsoft.
TypeScript doesn't split the community any more than Flow does. TypeScript remains very close to ES2016/2017, syntax-wise, with the only major difference being type annotations, but you have those with flow as well. Anybody who can read JavaScript can read TypeScript.
One thing I never understood with Babel is which features are shimmed in the output JS and which features are re-implemented?
What I mean is: I didn't know how to tell Babel which browsers I was targeting, and I'm pretty sure that some of their feature implementations do not feature-test the platform before activating, since they were compiled in. Is that the case?
Also, do you have to tell TypeScript your target runtime for it to use its ES3 async/await logic vs. its ES2015 one (which uses generators), or does it automatically figure it out?
You do tell TypeScript which runtime you're targeting, and that's how it decides how to compile it. You either set `target` in your tsconfig.json, or you provide `--target` on the command line.
They're getting better at describing which features are shimmed, but the easiest way to check is by running a single plugin on a bit of code and seeing for yourself.
For instance, I decided just now to check if `transform-object-rest-spread` included an Object.assign polyfill (because spread syntax transpiles to Object.assign behind the scenes.)
With the following input:

```javascript
var foo = { x: 1, y: 2 };
var bar = { a: 3, b: 4, ...foo };
var baz = new Promise(function(resolve, reject) {
  resolve("quux");
});
```
I get the following output:

```javascript
var _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };

var foo = { x: 1, y: 2 };
var bar = _extends({ a: 3, b: 4 }, foo);
var baz = new Promise(function(resolve, reject) {
  resolve("quux");
});
```
The problem here, (to borrow phrasing from the docs) is that "Babel uses very small helpers for common functions such as _extend. By default this will be added to every file that requires it. This duplication is sometimes unnecessary, especially when your application is spread out over multiple files."
To solve this problem, many people use the `transform-runtime` plugin [1] as well. To put it shortly, it replaces inline helper functions (like _extends in the example above) with imports from an adapter module called 'babel-runtime', which itself exposes the relevant polyfills (regenerator[2] and core-js[3]).
With `transform-runtime`, I get the following:

```javascript
import _Promise from "babel-runtime/core-js/promise";
import _extends from "babel-runtime/helpers/extends";

var foo = { x: 1, y: 2 };
var bar = _extends({ a: 3, b: 4 }, foo);
var baz = new _Promise(function (resolve, reject) {
  resolve("quux");
});
```
Notice that the Promise implementation was polyfilled as well.
> I didn't know how to tell Babel which browsers I was targeting...
You might *really* like babel-preset-env [4], a new project they just launched. Basically, you give it a set of browser targets (much like autoprefixer) and it works out exactly which plugins you need to support the latest (currently es2015, I think) version of JavaScript.
Hope that helps! Sorry for going on a Babel-related spiel in a Typescript thread, couldn't help myself. ;)
Deduplication of helper methods is now available in TypeScript too, using the 'importHelpers' compiler option. It will automatically add import statements to include the helpers from the 'tslib' package; see https://github.com/Microsoft/tslib
I've been using Typescript for over a year, and the amount of improvement in that relatively short time span has been incredible. With 2.1, Typescript shows no signs of slowing down.
Can someone comment on the difference in reliability between using TypeScript and a natively statically typed language like Haskell or Scala? Is there any? Or is the type safety really as good when you use TS?
One thing you might miss in TS is the ability to just try to check or cast an object at run-time to check that it's really of some nominal type, or structurally compatible with some type/interface. TS can't do that because it doesn't have runtime type information for its types - the types are a compile-time only thing. TS does allow creating functions that return type predicates, but that requires your own manual checking as implementation, the type system can't help you write those as far as I know.
I was missing this mostly for incoming requests. A good solution for me has been to generate json-schema from typescript interface definitions, using the typescript-json-schema npm package. Then incoming data can be checked vs the generated json schemas. Works fine and I get json schemas for api endpoints from it as well, so no complaints.
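For smaller cases, the hand-written type predicate mentioned above can be sketched like this. `Foo` and `isFoo` are illustrative names; the checks themselves are our responsibility, since the compiler only trusts the `value is Foo` annotation:

```typescript
interface Foo {
  a: boolean;
  b: string;
}

// A user-defined type guard: the implementation is manual, but once
// it returns true, the compiler narrows the value's type to Foo.
function isFoo(value: any): value is Foo {
  return value != null
    && typeof value.a === "boolean"
    && typeof value.b === "string";
}

const incoming: any = JSON.parse('{"a": true, "b": "hello"}');
if (isFoo(incoming)) {
  // Inside this branch, incoming is typed as Foo.
  console.log(incoming.b.toUpperCase());
}
```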
It's a bit more complicated than that because of JS's prototype-oriented object system. If MyCustomType is a proper prototype (say you built it with the class keyword) you can certainly instanceof it. (That said there are strange edge cases where statically typed class-based object intuition falls down against prototype-oriented reality.)
The issues tend to start cropping up as you get further into TS-specific types like generic types. In a statically typed language you can inquire about the specifics of an implementation of a generic directly, and often even dispatch directly on that type. JS has no concept of your generic wrapper type and you need some other flag or logic to determine the runtime type of your generic code.
But in TS itself structural typing is used everywhere for actual type checking (except for classes with private members), and from my understanding there's no way to have TS help you check at runtime whether a type is structurally compatible with a value.
I haven't worked too much with Haskell, but I've spent some time with Scala - they have pretty advanced type systems.
TypeScript can't match them 100%, but it's getting there. For example, it doesn't have variance annotations, but it does have f-bounded polymorphism (which is unheard of in most mainstream languages).
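For readers who haven't seen it, f-bounded polymorphism can be sketched like this; `Version`, `compareTo`, and `max` are made-up names for illustration:

```typescript
// F-bounded polymorphism: the type parameter T is bounded by a type
// that mentions T itself (Comparable<T>).
interface Comparable<T extends Comparable<T>> {
  compareTo(other: T): number;
}

class Version implements Comparable<Version> {
  constructor(public major: number, public minor: number) {}
  compareTo(other: Version): number {
    return this.major - other.major || this.minor - other.minor;
  }
}

// max only accepts types that are comparable to themselves.
function max<T extends Comparable<T>>(a: T, b: T): T {
  return a.compareTo(b) >= 0 ? a : b;
}

const newest = max(new Version(1, 2), new Version(1, 5));
```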
The language is powerful enough to build your own constructs. I maintain a Typescript library[1] that emulates Scala options/trys/futures.
For day-to-day work, Typescript is a joy to work with. I describe it to others as the child of Scala and Ruby - pretty powerful type system but with a large community.
They're different things: TS is a transpiler, while Haskell (GHCJS) and Scala (Scala.js) compile their respective host languages to JavaScript.
In a perfect world you'd go with the latter, but there's overhead involved (relatively slow compilation and large generated binaries) that is mostly absent in TS. Also worth noting that TS' community is gigantic in comparison to that of Scala.js and GHCJS.
I was asking about the native languages themselves and their typesystems, not the compile-to-js versions of those languages.
Regardless, I don't see much of a difference between TS -> JS and haskell->JS and scala -> JS. It's all from one language to another. the fact that TS is still considered a form of javascript is kind of irrelevant other than for semantics
> I was asking about the native languages themselves and their typesystems, not the compile-to-js versions of those languages.
Are they different? If language X can be expressed in its entirety as language Y then we can speak about X and Y interchangeably -- that's literally the point: full interop between client and server.
Anyway, there's a world of difference between TS and GHCJS, Scala.js, Clojurescript, Js_of_ocaml, etc. One speaks javascript, the others speak both.
As for reliability, in so far as TS is typed it will provide the compile time guarantees that its type system provides; not surprisingly the same goes for Scala.js, GHCJS, etc.
So, the decision point probably hinges more on aforementioned overhead, size of community, and tooling than language/type system features/power. If it were otherwise we'd have seen a huge uptick by now wrt adoption rates for the compile-to-js languages. Maybe that will change in future but for now the big players drive the javascript bus...
TS->JS is very different from both in that the resulting JS code is the same as the typescript code with the types removed. In contrast Haskell and Scala compile to entirely different (and mostly unreadable) code
Haskell and Scala are safer than TypeScript... but not by too much :) The only major safety features lacking in TypeScript are variance annotations and proper variance for functions, which are bivariant WRT their arguments. Short demonstration of the problem:
```typescript
// This should not be assignable: functions should be
// contravariant WRT their arguments, so if O2 is a subtype of O1
// then F1 is a subtype of F2, which means you can't assign
// a value of type F2 to F1: there are functions in the
// set of functions F2 that don't belong in the subset F1.
type O1 = { a: string; b: string }
type O2 = { a: string; b: string; c: string }
type F1 = (o: O1) => string
type F2 = (o: O2) => string

// Should not compile!
let f: F1 = (o: O2) => { return o.c.toString() }
f({a: '1', b: '2'}) // throws at run time (o.c is undefined)
```
Other than that, TypeScript has been pretty safe since the addition of "strictNullChecks".
Haskell and Scala have algebraic datatypes (or case classes) which make it easier to model data (especially in Haskell, where the syntax is really nice). TypeScript also supports disjoint unions (a union of several types with a tag field that has a concrete string value as its type) and control flow analysis, so with some effort it can provide "idiomatic" JS-style ADTs. Haskell and Scala can do exhaustiveness checks, making sure you've covered all the cases. TS can also sort of do that (when strictNullChecks is on):
```typescript
type T1 = { tag: 't1', value: string }
type T2 = { tag: 't2', value: number }
type T3 = { tag: 't3' }
type T = T1 | T2 | T3

function f(t: T): string {
  switch (t.tag) {
    case 't1': return t.value;
    case 't2': return t.value.toString()
    // Compiles only if you uncomment this line.
    // Otherwise the inferred return type string | undefined is
    // incompatible with the specified return type string.
    //case 't3': return 'N/A'
  }
}
```
Not as pleasant or as general as Haskell, but better than most other mainstream languages.
Both Haskell and Scala have higher-kinded generics, something that TypeScript still lacks. This means importing category theory concepts into TypeScript is pretty much impossible. The problem isn't just with category theory, though; the need to parameterize generic types with other generic types as type arguments can come up in everyday JS code too. For example, a library that accepts a promise constructor as an argument is parameterized by a generic type. [1]
Before v2.1 TypeScript already had some powerful features for working with record types, such as record unions and intersections. With the mapped types introduced in 2.1, however, you could say it surpasses Haskell and Scala in this regard - if not with features, then at least with pleasantness / ease of use. I believe it's possible to achieve the same things in Haskell with vinyl [2][3], but those are definitely not Haskell's native records. Not sure if Scala's shapeless [4] can provide something similar (the answer is probably yes).
Haskell has type classes, and Scala can provide the "equivalent" via implicits+traits. Both of these allow a very nice and generic style of programming with implementation flexibility (you can write new typeclass instances for old code). TypeScript has structural interfaces, which offer some flexibility compared to nominal interfaces: you can write an interface that is a subset of old code, then use both old and new code under that interface without modifying the old code. Still, it's not as flexible as typeclasses (the implementation must already be a subset of the original old code). They also can't offer the return-type-based implementation selection that typeclasses can.
Of course, Haskell and Scala both have various type / macro / template superpowers that TypeScript doesn't have and isn't likely to ever have. For everyday programming though, the above list should cover the vast majority of interesting features.
Async/await support for most browsers and node 0.12+ is definitely a welcome feature; callback hell and tripping over promise chains are among the most painful experiences in TS development, IMO.
Congratulations to the TS team. We are happily using it at work; it's such a tremendous upgrade over JavaScript, while keeping the entire ecosystem at hand.
Typescript seems to get a lot more attention (at least on HN). How does Flow progress? How is the community around it? I'm curious to know more as an external to both projects.
Try webpack, it's fast and the configuration is nearly declarative.
You can also create a separate build task to run a smoke test by directly calling the ts compiler to compile your code (skipping pre- and post- build steps). This was a lot faster than I expected.
I think I'm still waiting hopefully on an answer to supporting more automatic polyfill imports before banishing babel entirely. (Though the big one has been an Object.assign polyfill and certainly object spread removes a lot of the need for that.)
I also would need to straighten out some things with how I'm handling TSX I think.
Thank you (!) for your amazing contributions. TS is the best new thing in tech.
That said:
Your linguistic genius is way ahead of the tooling.
I feel as though some of these 'new and cool' 2.1 things are a little bit intellectual, maybe useful in some cases ...
But getting TS to work in the real world, the various build configurations, tool-chains etc. - it's still clumsy.
It was difficult to grasp the difference between AMD and other paradigms. I still have problems with circular dependencies, or rather, things happening before modules are loaded.
Here's one pain point:
Creating a static attribute on a class and initializing it right there, as in:
```typescript
class A {
  static b: B = new B();
}
```
This means that `new B()` will get executed right when that module is loaded, possibly before the module containing B is loaded.
It's ugly, mechanical - but it's not a 'fine point'. I think these are the kinds of issues which are more likely to hold people back, as opposed to the lack of some rather fancy new paradigms such as 'Mapped Types'.
Your pain point sounds like a runtime problem with your module loader. You may want to investigate your module loader's issue tracker. (I recommend SystemJS [and jspm] over AMD these days, for what that is worth. It handles module loading quite well including all the complexity of circular dependencies that ES2015 specced and TS supports.)
> I feel as though some of these 'new and cool' 2.1 things are a little bit intellectual, maybe useful in some cases ...
async/await and object spread should be useful in quite a lot of places. The other "intellectual" additions are immediately useful indirectly, because they were built to support object spread (so that object spread types make sense), even if you don't wind up using them directly (because you aren't writing complicated type definition files, for instance). Even then, you might find a bunch of uses for the types Partial<T> (partial objects) and Readonly<T> (shallowly frozen objects).
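For example, Partial<T> on its own covers the common defaults-plus-overrides pattern. `Config` and `makeConfig` are made-up names for illustration:

```typescript
interface Config {
  host: string;
  port: number;
}

// Partial<Config> makes every property optional, so callers can
// override just the fields they care about; object spread fills in
// the rest from the defaults.
function makeConfig(overrides: Partial<Config>): Config {
  const defaults: Config = { host: "localhost", port: 8080 };
  return { ...defaults, ...overrides };
}

const cfg = makeConfig({ port: 3000 });
```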
Really happy with the development pace. I started using it by contributing to VSCode and was very pleased with the great tooling and sane language and syntactic sugar.
Any plans to add C#-like extension methods to TypeScript? Or is there a way to achieve the same thing already? I know that a previous suggestion to add extension methods was closed as out of scope. But maybe it's time to revisit that, since TypeScript is now doing significant code transformations for downlevel await support.
JavaScript already allows you to do something very similar to extension methods by extending prototypes.
OTOH, a function accepting as a parameter an object which implements a specific interface will make your code much easier to extend, compared to modifying the prototype of a concrete class.
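A minimal sketch of that interface-parameter approach (`HasLength` and `describeLength` are illustrative names); thanks to structural typing, anything with the right shape fits, with no prototype patching:

```typescript
interface HasLength {
  length: number;
}

// Accept anything with the shape we need instead of extending a
// concrete prototype such as String.prototype.
function describeLength(x: HasLength): string {
  return `length is ${x.length}`;
}
```

Both `describeLength("hello")` and `describeLength([1, 2, 3])` type-check, because strings and arrays are structurally compatible with `HasLength`.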
It's not possible to add extension methods to TS in a way that would be usable in practice. https://github.com/Microsoft/TypeScript/issues/9 has a long discussion of the problems with trying to do this.
Basically, Python 3 added syntax for annotations, but most of the work is to make sure annotations parse like code expressions would, and then ignore whatever those expressions are.
That would defeat the point of 'any', right? It's for variables that you want to allow to do anything (including be a T) without the compiler complaining.
If you don't want that behaviour, don't give x the 'any' type. In that case I'd probably use '{}' instead, and then use type guards (https://www.typescriptlang.org/docs/handbook/advanced-types....) to convince TypeScript that it's a T (whatever that means).
Maybe, but I think there's a lot of cases where you want to be able to use 'any' explicitly. Tsc does already have noImplicitAny, so you have to have opted in to this behaviour.
That only works for primitive types. As soon as you start using classes and sum types you need an explicit parser. I wrote one for a client about a year ago. It took a lot of effort but was well worth it.
You can create TypeScript models from C# already. TypeLITE[0] will generate TS models using T4 templates. TypeWriter[1] is a visual studio extension that regenerates TS models whenever you save a file.
JS admits that ... is two operators ("spread" and "rest"), but that they are essentially related/cousins (dual, in mathematics terminology) and it works well for both of them to look the same.
Rest is the variadic case:
```javascript
function myFunction(a, b, ...rest) {
}
```
But rest can also be used for destructuring, such as:
```javascript
let [a, b, ...rest] = ['a', 'b', 'c', 'd']
```
Or
```javascript
let { a, b, ...rest } = { a: 'a', b: 'b', c: 'c', d: 'd' }
```
In this case it's an extra-bang-for-the-buck expansion of variadic parameters into something more generally useful: tearing apart lists (and now objects). From that light, spread is the "structuring" side of rest, in the left-hand/right-hand world of destructuring/structuring.
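The structuring side can be sketched in a few lines (values are illustrative):

```typescript
const rest = ['c', 'd'];

// Spread is the mirror image of rest: instead of collecting leftover
// elements into a new binding, it expands an existing array or object
// into a new literal.
const letters = ['a', 'b', ...rest];
const merged = { a: 'a', ...{ b: 'b', c: 'c' } };
```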
It depends on the context. In JS variadic parameters are also supported. This is like complaining they confusingly use curly braces around function bodies while you got used to them surrounding object declarations :)
I recently started learning JS, and now I am confused between TS and babel. Can anyone give me a reason why I should use either of the two and when I should use either of the two?
Another anecdote - I've been using TS 2.1 for the last month on three interconnected projects - a REST API, an express app, and also for client-side code in that express app - and it has been a great experience. async/await is a godsend, and @types/ makes what used to be a terrible process much more streamlined and easy. If you have to write JS, TypeScript is the best way I've ever found.
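For anyone who hasn't seen it, async/await lets asynchronous code read like straight-line code; this sketch uses an invented `fetchGreeting` with a stand-in for a real API call:

```typescript
// async functions return a Promise; 'await' suspends until it resolves.
async function fetchGreeting(name: string): Promise<string> {
  // stand-in for a real awaited API/database call
  const user = await Promise.resolve({ name });
  return "hello, " + user.name;
}
```

TS 2.1 can compile this down to ES5/ES3, which is a big part of why it felt like a godsend at the time.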
`--noImplicitAny` is the recommended flag. It is not set by default to ease adoption on existing JS code bases.
The new changes make `--noImplicitAny` require fewer type annotations: the compiler can now figure out the type of a variable by following the control flow.
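A sketch of that improved inference, assuming the TS 2.1 behaviour described above: an unannotated `let` starts out as an implicit 'any', but the compiler tracks assignments along the control flow, so later uses are typed even under `--noImplicitAny`:

```typescript
let x;                         // no annotation
x = "hello";                   // x is treated as string from here on
const shout = x.toUpperCase(); // compiles: string methods are known
```

Before 2.1, `x` would have stayed 'any' (or been an error under `--noImplicitAny`), and the `toUpperCase` call would have gone unchecked.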
I've been a Linux developer for ages and C# was never to my taste; I'm still a bit of a Microsoft-hater even now (Visual Studio Code is the only Microsoft tool I've adopted, for JS development; for other languages I still use vi/Geany). How tightly is TS related to C#? That has been the main reason I haven't tried TS seriously so far. I don't want to have anything to do with C#. I know...
TS is a superset of ES6+. So, not much. Dart is closer to C#, I'd say.
C# is a well-designed language, though. You can't say that about the first iterations of JS, and none of JS's baggage was ever dropped, for the sake of maintaining backwards compatibility.
TS gently pushes you in a more modern direction, which means that those JS issues matter much less in practice. It works pretty well. It does what it's supposed to do; it makes working on larger projects much more manageable.
If you write a lot of JS, I highly recommend checking it out.
Someone once asked me, "Isn't TypeScript a language for C# developers that don't know JavaScript?"
It's unfortunate that people think TS is similar to C# just because it came from Microsoft. The Flow language is like 80% similar to TS, but no one says Flow is similar to C#.
TS is just JS + new features from future JS specification + optional type system.
And the optional type system is fundamentally different from the one in C#; it was designed to fit well with JS patterns and idioms (a structural rather than nominal type system).
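A quick sketch of what structural typing means in practice; `Point`, `Pixel`, and `norm` are invented for illustration:

```typescript
interface Point { x: number; y: number; }

// Pixel never mentions Point, yet it is accepted wherever a Point
// is expected, because its members match structurally.
class Pixel {
  constructor(public x: number, public y: number) {}
}

function norm(p: Point): number {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

const a = norm(new Pixel(3, 4)); // OK: Pixel is structurally a Point
const b = norm({ x: 3, y: 4 });  // a plain object literal works too
```

In a nominal system like C#'s, `Pixel` would have to explicitly declare that it implements the interface; here the shape alone is enough, which is what makes the type system fit existing JS idioms.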
If you look at TS code that looks like C#, it's because modern JS looks like that (class syntax, classical inheritance, lambda syntax - it's all ES7, or ES2016/ES2017, or whatever it's called now).
They are not related at all, except that they share an original designer. However, if you are really so fundamentally opposed to anything and everything a certain language represents, then you've got to hate a lot of languages for a really, really petty reason.