Deno Is Webby (jim-nielsen.com)
407 points by todsacerdoti on March 16, 2022 | 206 comments



One of the things I love about Deno is that since it implements standard web APIs, many libraries built for the browser just work. For example, I recently made a simple static site generator with Deno to make my blog, and I found that I could use Marked for markdown support simply by importing it like this:

  import 'https://cdn.jsdelivr.net/npm/marked@3.0.7/marked.min.js';
  // now I can use window.marked()


One of the many things about Node that Dahl explicitly wanted to "fix"* when building Deno was that the global namespace should be `window`, because that's what it is in JavaScript's natural habitat.

*scare-quotes because the reader might feel strongly opposed to the term, not because I have an agenda


> that the global namespace should be `window`, because that's what it is in JavaScript's natural habitat.

Oh wow, I had no idea. That rules!


ECMAScript now has both `globalThis` and `self`, which can act as the global namespace (they all point to the same object, except in workers, since window and globalThis cannot be used in workers)


No, `globalThis` is available in workers. It points to `self`. On the page, `globalThis` points to `window`. `globalThis` thus becomes the one, safe, global reference you can make for code that runs in both pages and workers.
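
For example (quick sketch), this line runs unchanged on a page, in a worker, and in Deno:

  // `globalThis` resolves to `window` on pages, `self` in workers,
  // and the global object in Deno
  globalThis.setTimeout(() => console.log("hi"), 0);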


`self` works on pages too.


Thanks, I misunderstood this.


That does seem like it would break a common pattern in web app code that runs both in browsers and on a server (such as a React app with server-side rendering): checking `typeof window === "undefined"` to determine whether you're on the browser or the server.


A lot of the time that pattern is written because the APIs on the web and in Node are different, and code will break if you call something that's not isomorphic. Perhaps Deno fixes that as well, and the pattern isn't needed in the first place. Obviously not every case, though.


In my experience, I've never seen that used just because the API is different. There are polyfills for fetch and other things to align Node and the browser. When I've seen that check, it's been entirely because the DOM API was needed, or because fetching was required on the client side only.


I'm surprised nobody else has called this out - implementing browser APIs in a non-browser runtime seems like a really bad idea. Take a look at https://developer.mozilla.org/en-US/docs/Web/API/Window - the submitted article seems to have demonstrated literally the only APIs that make sense to implement in a terminal. What would happen when a library tries to call window.location or window.history? Almost none of these map neatly to any possible implementation on the server or in a scripting environment.


Do you have a reasonable example of a library that would call location or history?

All I can think of is "using query string as getenv() for random debug hacks" or "using history as a hack to synchronously reach the structured clone algorithm", neither of which a library should do (but both easily emulatable).


window.location could be repurposed as a way to get or change the current working directory? Though the idiom of "window.location = <other path>" as a navigation mechanism would not really be portable.

That said, there are other browser APIs that I would love to see available elsewhere:

- window.crypto as an interface over libcrypto

- window.localStorage and window.sessionStorage as an abstraction over temporary files and an in-memory key/value store, respectively

- window.indexedDB for an in-runtime DB à la Erlang Term Storage or MUMPS

- window.opener, window.parent, window.open(), window.postMessage() for inter-process communication

There's a lot to work with here if you get creative.


if i have to think about what the browser-equivalent operation would be when i'm working with a server-side API, i'm going to use a different language entirely.


Sure, but then why would you use Javascript in the first place? The situation deno is trying to address is that JS developers need to memorize two standard libraries, one for browsers and one for node. You should be able to work with one standard library like you can in most other languages.

The first attempt at this was to port the node standard library to the browser via browserify and the like. This tended to create gigantic JS build artifacts and frequently led to runtime errors when developers used an abstraction that just had no equivalent in the browser (like saving something to disk or opening a separate process). The browser is definitely the lowest common denominator of JS runtimes, even if there are a few browser APIs that just don't make sense in a server environment (like window.history)


i'm all for standardizing overlapping functionality, but not standardizing just for the sake of it.

node 17 is bringing the fetch api to core which to me makes perfect sense. there's definitely no point in node creating their own version of fetch. and fetch is useful on both the browser and server, given that fetch is an easy way to facilitate a client-server connection over http/s. perhaps the same applies to window.crypto.

i think you start to lose the benefit of standardization when: 1. half the api gets repurposed to try to fit a (in some cases, completely subjective) counterpart 2. functionality isn't 1:1


You should be branching on the existence of `document` to discover if you are running in a DOM context, not `window`. Using `window` to branch SSR and CSR code will also inevitably give false positives/negatives in workers and worklets when used in libraries.
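
Something like this, for instance (untested sketch):

  // true only in a DOM context; workers and server runtimes lack `document`
  const hasDOM = typeof document !== "undefined";
  if (hasDOM) {
    document.title = "client render";
  }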


Window is not available in a worker context. What's a situation where you have a window available and no document?


Deno


That would mean Deno is not following the web specs.


There’s better ways to do this - personally I’ve found the approach taken in this tiny lib very reliable: https://github.com/flexdinesh/browser-or-node/blob/master/sr...


That just seems worse than what npm/yarn have. Despite Node's flaws, at least those have a hash check on the downloaded package against what's in the lock file.

EDIT: I was wrong. See @spoiler's reply.


So does Deno: https://deno.land/manual/linking_to_external_code/integrity_...

It's not part of a package manager, because there isn't one, though.
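
From that manual page, the flow is roughly this (flags as of Deno 1.x):

  # write the hash of everything deps.ts pulls in, transitively
  deno cache --lock=lock.json --lock-write deps.ts

  # later, or on CI: re-fetch and fail if any hash changed
  deno cache --reload --lock=lock.json deps.ts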


Ah. I stand corrected.


> import 'https://cdn.jsdelivr.net/npm/marked@3.0.7/marked.min.js';

Doesn't this mean your code won't run if the website https://cdn.jsdelivr.net goes down?

In Node.js you download the modules you need to a local folder under your dev folder. So you can run and develop and test your code whether you have an internet connection or not.



Is this really a better workflow? I mean, it works, but now you do not have a central list of all your external dependencies. Sure, they recommend you just do all your imports in a single file and re-export them. But that sounds very tedious, and at the end of the day, to what advantage? I'm really struggling to see it.


For scripts, it's pretty sweet to be able to import dependencies directly via URL without needing to do an `npm init` and `npm install`. For larger projects (like my static site generator), I didn't find it tedious to import from a central deps.ts file, although I admit that importing from a relative path like '../../deps.ts' is not quite as nice as importing by package name like in Node. I'm OK with that tradeoff, though, especially since it matches the way imports work in the browser.
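
For anyone who hasn't seen the convention, deps.ts is just a module that re-exports your dependencies (untested sketch; std version pinned only for illustration):

  // deps.ts
  export { serve } from "https://deno.land/std@0.130.0/http/server.ts";
  export { parse } from "https://deno.land/std@0.130.0/flags/mod.ts";

  // main.ts
  import { serve, parse } from "./deps.ts";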


What about transitive dependencies?


My question too. I assume Deno downloads one component and then all its nested dependencies. But now every nested dependency can come from a different server, just like the components you refer to can come from anywhere on the internet.

I wonder if this could be a security issue. It's hard to know who has control over all those nested repositories, and who keeps an eye on them to ensure they are not maliciously modified. Is anybody checking their cryptographic signatures?

Only asking because I don't know much about Deno.


Locked modules have their hashes stored, so if something does change, you'll know right away. So will anybody else who got the source from you with lock.json included.


So, this is kinda unusable for any corporate setting where all dependencies, including transitive ones, are downloaded from a private server.


Er, why? You'd just set up an import map to point it at your private server.


> You'd just

Ah. The inevitable "just".

And how exactly do you set the import map to point to the private server for the transitive dependencies?

Deno's own docs don't bother with such trivialities and show a very toy example, of course, https://deno.land/manual/linking_to_external_code/import_map...


Rather than importing and re-exporting dependencies, Deno supports import maps:

https://github.com/WICG/import-maps https://deno.land/manual@v1.20.1/linking_to_external_code/im...
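
A minimal sketch of what one looks like (the trailing slash maps a whole URL prefix):

  // import_map.json
  {
    "imports": {
      "std/": "https://deno.land/std@0.130.0/"
    }
  }

  // main.ts, run with: deno run --import-map=import_map.json main.ts
  import { parse } from "std/flags/mod.ts";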


It seems like a feature that would be better implemented at the IDE/editor level. Paste in a URL and it asks you if you want to add it to package management.


Well... CDNs aren't supposed to go down, but if they do suffer an outage, Deno caches dependencies locally.

If that's still not enough, you can also just download the JS file and import from a local path or use the vendoring feature.


Not sure that “CDNs aren’t supposed to go down” is a very reassuring point, nor one that Deno’s authors would make in its defence.

Deno’s vendoring approach isn’t like a stopgap in case third party servers go down. It’s the ’right’ way in Deno, and is another brilliant fix to Node’s approach.

Node always makes you go through a formal step to install a dependency - and then you’re still reliant on npm’s CDN being online during your builds anyway. The only way to break that build time dependency is to use a clunky, nonstandard hack to check in your node_modules, like yarn 2.0 or pnpm, which then takes you right off Node’s “golden path” (such as it is) and creates loads of incompatibilities.

Deno’s approach is the best of both worlds.

1. With Deno you can import direct from a URL with no formal install step, which is great for prototyping. The caching engine makes it fast without every little toy project needing a giant node_modules folder.

2. Vendoring your scripts provides a dead easy, golden-path way to make your project completely non-dependent on third parties during build, i.e. it gives you the same advantage as using Yarn 2 or pnpm in Node, but as a first-class citizen of the runtime, so it's efficient and standardised and will therefore not have tons of ecosystem incompatibilities.
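
Concretely, if I have the commands right (Deno 1.19+):

  # copy every remote dependency, transitive ones included, into ./vendor
  deno vendor main.ts

  # from then on, builds never need the network
  deno run --no-remote --import-map=vendor/import_map.json main.ts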


> you’re still reliant on npm’s CDN being online during your builds

Why? I'm building my program out of components in my node_modules folder. Why do I need internet?


I mainly meant CI/CD builds. Or any future build on a new machine.


I cannot help but root for Deno whenever it features on here. It offers such a wonderful potential future for JavaScript. It's not fully-baked yet, but man, it solves so many problems in one fell swoop.


It's so good I worry about the Betamax effect


What betamax effect? Didn't it fail since the cartridge design was worse and also held less tape than VHS?


No. It failed because JVC saturated the market with incompatible and inferior VHS before Beta could get there.


It failed because the porn companies all went for VHS (for licensing reasons iirc).

Porn companies have decided basically every format war (including pioneering streaming).


I can't wait for the porn industry to make the scene with pull requests to Chrome affecting codecs or CAs. They have been hesitant to flex because they're so vulnerable to taboo but I'm confident that's a problem with a solution they will eventually find.


My understanding was that Betamax had inferior licensing terms, and that it was Sony’s greed that sunk Betamax.


Seems likely enough; however, nothing could make up for being late to market in this case.


Technology Connections has some decent videos on the topic.

https://www.youtube.com/watch?v=FyKRubB5N60&list=PLU_9ndwi4C...


> You can log and style CLI output the same way you do it in the browser’s developer tools: using what you know with console.log

> console.log("%cHello World", "color: red");

TIL, neat! If anyone else did too - https://developer.mozilla.org/en-US/docs/Web/API/console#sty... looks great

Using alert/confirm/prompt for CLI tools is nice too, makes a lot of sense to build that in and use the web APIs for it.
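
A toy Deno example of both (sketch):

  console.log("%cDone!", "color: green; font-weight: bold");
  const name = prompt("What's your name?"); // reads a line from stdin
  if (confirm(`Deploy as ${name}?`)) {      // y/N on the terminal
    console.log("deploying...");
  }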


Take a look at the developer console when visiting www.facebook.com


That’s great, thanks! I was wondering what that `%c` was.


console.trace looks sweet. Thanks for the link.


This is nice, until it's not. I have spent a fair amount of time over the last several years trying to make node act like the browser, or vice versa. It's doubly confusing to juniors who don't understand the difference between a language and a runtime. alert() looks like a standard function and should be specified by the ECMAScript Language Specification. But it's actually specified by the HTML standard, because it's Window.alert(), and Javascript's scoping rules let you call it simply as alert(). Trying to explain what's JS and what's Web is really difficult, and design decisions like this only muddy the waters.


I think it's even worse than that, because by being "webby" here Deno has committed itself to alert(); multiple browser vendors (including Mozilla) have advocated removing alert() from the web platform.

(They want to do that because alert() "stops the world" by blocking the main thread event loop, and because they make it easy for a site to post a message that appears to come from Chrome itself, or from another website.)

https://groups.google.com/a/chromium.org/g/blink-dev/c/hTOXi...

> We’re on a long, slow path to deprecate and remove window.alert/confirm/prompt and beforeunload handlers due to their role in user-hostile event loop pausing, as well as phishing and other abuse mechanisms. We’ve been successfully chipping away at them in various cases, e.g. background tabs, subframes with no user interaction, and now cross-origin subframes. Each step is hard-fought progress toward the eventual goal, and we should consider carefully whether we want to regress, even in an opt-in manner.

Rich Harris has a good blog post about this. https://dev.to/richharris/stay-alert-d


Removing alert is nonsense, and I don't think they will ever do it (without breaking a ton of websites that use it). Why, as a developer, should I have to build a modal with HTML+CSS+JS just to have a prompt that informs the user about something?

Ok, alert is ugly, but for a lot of situations (= industrial software) it's perfectly acceptable.


Why as a user should I have to endure a website with an intrusive use of alert? I don't care what happens to websites that use alert; I never want to see my browser get locked up because of lazy devs wanting to use alerts.


Alert doesn't lock your browser. The developer can spend a lot of time and effort to make an even more intrusive and annoying custom alert.

Why should I as a user have to endure Instagram and Twitter blocking content with a delayed login prompt that cannot be canceled? Because they can. Browser APIs or not, malicious developers will be malicious.


Yes it does. You can't click on the URL bar and just navigate to a different site in chrome while an alert is displayed.


You can in firefox (just tried from the console).


I can.


Alerts haven't locked up the browser since the Bush administration.


Feel free to update https://stackoverflow.com/questions/8825384/alert-is-bad-rea... with new information.


That question has been closed and locked since Obama's first term


Browsers are well equipped to deal with alert() abuse. You can usually block them after the second one.


That's only because the implementation is poor. It doesn't have to be intrusive.


Sort of. This can obviously be non-blocking and redirected to whatever stdin-ish receiver you prefer:

  alert('whatever');
This cannot:

  const answer = confirm('A question?');
  const value = doSomethingWithUserInput(answer);
  return doSomethingWithValue(value);
The API depends on it blocking the main thread, and removing that expectation is effectively just as disruptive as removing the feature entirely: non-abusive usage would have no reason to call this function if not to block for user input. Introducing some magical suspension of the stack like async/await or generators would break assumptions about the entire concurrent execution model of the language (the inverse problem of “colored functions”, infectious async/await etc).

The best thing they can do without removing the API is provide an escape hatch, which most browsers already do when alert &co are called multiple times in quick succession.


Some devs want to use the native UI, but this isn’t okay either apparently. So are you advocating that every dev should implement their own alert modal?


> beforeunload handlers

Ooh no, how will sites tell me that my free offer will expire if I navigate away?

But on the serious side, this is useful for many antiquated apps (e.g. government stuff) that break tons of stuff if you attempt to re-submit a form, use the back button, etc, so I see that causing a ton of problems.


Yeah, especially after having had it for years, business leaders just do not understand why the browser would remove such a useful feature. And then you get questions like “So? Can we just stick with IE 10 then?”

I still haven’t found a way to consistently send a request while the tab/browser is closing.


visibilitychange and sendBeacon

Why would you need it to be consistent? Depending on this for anything other than analytics is bad design. It’s like catching sigterm on desktop.
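
For reference, the usual pattern for the analytics case is something like:

  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden") {
      // fire-and-forget; the browser queues this even as the tab goes away
      navigator.sendBeacon("/analytics", JSON.stringify({ event: "hidden" }));
    }
  });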


Yes. I work on applications that run for a long time in the browser with many subframes. The beforeunload event helps us clean up things and manage our applications' states better.


Rich Harris actually disagrees with the browser vendors decision to deprecate alerts. From the article you linked:

> We can't normalise the attitude that collateral damage is the price of progress, even if we accept the premise — which I don't — that removing APIs like alert represents progress. For all its flaws, the web is generally agreed to be a stable platform, where investments made today will stand the test of time. A world in which websites are treated as inherently transient objects, where APIs we commonly rely on today could be cast aside as unwanted baggage by tomorrow's spec wranglers, is a world in which the web has already lost.


This has also unfortunately been my experience, even when working with developers that have several years of experience writing JS.

There seems to be a big gap in knowledge when it comes to boundaries between the language specification, runtimes, and innate abilities.

People are consistently confused about why they can't use JSX without a build step (not understanding that JSX isn't "real"), why fetch() doesn't work in Node, why code in their Next.js app breaks randomly (server vs. client render context).

Lot of pain in trying to explain to folks that a browser and Node are two different universes that happen to speak the same general language.

There's too much magic in this ecosystem and not enough emphasis placed on taking the time to understand the tools you work with, IMO.


> Lot of pain in trying to explain to folks that a browser and Node are two different universes that happen to speak the same general language.

That's exactly why it's so good that Deno is trying to close the gap. More consistency = fewer surprises


speaking of next.js, it's always fun to watch web tech come full circle back to "maybe we should render this stuff before we send the bits, since servers and CDNs are real real good at sending bits real real fast, instead of making everyone render the same static stuff a billion times per day client-side". https://en.wikipedia.org/wiki/Movable_Type was generating static blogs back in 2001, and that's still a great idea 20 years later. I'm glad that "modern" JS thinking has finally caught up :)


One of my tasks at work is to support a website that was built 20 years ago. We keep talking about updating it to modern web standards. It sounds like, if we wait a little longer, our 20 year old site will be using modern web development techniques already ;-)


The javascript community is starting to move away from that now to just using server side rendering (with hydration). Static site generation works fine on smaller websites but gets really slow as the site grows.


I know it's probably not universally the best approach, but I think that with sensible blocks for content and Incremental Static Regeneration, even sites with many many pages can be statically generated very quickly.


JS is the only programming language I know of where people read zero docs and still decide they know how to write it.

They wouldn't dream of this with C, Java or even python (even though these probably work exactly like other languages they know -- unlike JS), but for some reason perfectly smart developers make this crazy decision all the time.


> They wouldn't dream of this with C, Java or even python (even though these probably work exactly like other languages they know -- unlike JS), but for some reason perfectly smart developers make this crazy decision all the time.

It's because they can. You can cobble something together and it works, or seems to work. It may fail later down the chain, but browsers accept all sorts of hackery, because they have historically been doing this for a long time. Before there was Javascript you could write broken HTML, nest elements wrong, and browsers still figured out a way to deal with it. The expectation of half-assed solutions has always been there on the web.


> why doesn't fetch() work in Node

Actually it does now! Or it will soon.

https://fusebit.io/blog/node-fetch/?utm_source=www.google.co...


This makes me feel less junior than I sometimes feel. I may not have a CS degree, but at least I understand a good amount of this stuff.


I sometimes think that a CS degree is part of the issue.

The things that make JS unique aren't even mentioned in a lot of degree programs.

Larry Wall famously said the three virtues of a programmer are impatience, laziness, and hubris. Not learning JS is at the exact intersection where people do all of these at the same time and meet with epic failure. People seem to think to themselves, "I learned Java and this looks like Java, so it must act like java" and so they run down the completely wrong path.


It doesn't help that JSX feels like it should be a part of the language. I'd really expect the main programming language of the web to have first-class support for HTML.



There is a TC39 proposal for a standard library[1]. One of the ideas reserves a URL scheme for built-in modules. So to import from a language-defined standard library you would use the "js" scheme (import Temporal from "js:temporal"), and importing from the runtime you could potentially use e.g. a "web" scheme (import AudioContext from "web:audio-context").

I really hope we will have something like that. It would both give us the option of never using magical globals and make the distinction clear whether the module comes from the language or the runtime. Although I can see a case where it could get annoying if you are writing for multiple runtimes and need to get e.g. fetch (import fetch from "runtime:fetch").

1: https://github.com/tc39/proposal-built-in-modules


> I have spent a fair amount of time over the last several years trying to make node act like the browser, or vice versa.

And the right thing is to resist doing either. Instead, think about what "services" your program actually needs to be able to function, and then isolate those parts from the rest of your application by putting them behind a well-defined interface. In other words, the best way to fix the incompatibility problem caused by API mismatches is to never make the mistake of coding directly against the host platform's APIs to begin with. Trying to do it with compatibility shims to make one platform look like the other is a fool's errand. You end up running around trying to achieve parity with a mammoth API surface area (which might never have been especially well-designed to begin with...).

Let's say you're implementing a program similar in scope to UNIX's file(1). For our example, though, suppose you're really only concerned with text files and specifically whether a given text file is using DOS-style CRLF line separators or UNIX-style LF terminators. You really only need two capabilities: a `read` operation to get the contents of a file and a `print` operation to show the output to the user. Does your program care whether that `read` is happening with a Web standards-backed FileReader or NodeJS's proprietary `fs` module? There's no reason it should. Design the best "system" layer that makes the most sense for your application's needs.

You can see this implementation strategy in the way the TypeScript team wrote the code for the TypeScript compiler itself when it was made public. Even in the early days, it could run on multiple platforms—including NodeJS, JScript/Windows Script Host, and various browsers (old or new)—because it didn't overly concern itself with anything except its real job of lexing, parsing, and type-checking its inputs—wherever they came from—and then writing the output in a platform-agnostic way.
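
A minimal sketch of that `read`/`print` idea (names are mine; the Deno and Node calls are just two possible backends):

  // system.js - the only platform-specific code in the program
  export const system = {
    read: (path) => Deno.readTextFile(path), // Node backend: fs.promises.readFile(path, "utf8")
    print: (s) => console.log(s),
  };

  // app.js - pure application logic, no platform APIs in sight
  export async function lineEndings(path, sys) {
    const text = await sys.read(path);
    sys.print(text.includes("\r\n") ? "CRLF" : "LF");
  }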


> never make the mistake of coding directly against the host platform's APIs to begin with

I regularly read this advice and I think it's bad in general. If you only ever work against an abstraction, you lose time and might get a worse end result than the one without the abstraction. And in the end, you might throw it away, if it doesn't prove successful. So move the cost for working with an abstraction into the future as much as possible, when you know the alternatives you have to abstract over.


I’m not exactly 50-50 on this, but fairly close. It depends! People talk about ‘premature abstraction’ as if spending any time abstracting up front is premature; others talk about abstractions they find generally useful in such an abstract way that it’s easy to mistake those general findings for rules.

There’s a distinct set of skills and instincts involved in recognizing a generalization of a problem, and a narrower one in recognizing matching solutions which are actually suited to the specific problem. It’s not a skill/instinct everyone has, but it’s not one that should be immediately dismissed either. And one’s recognition of both can grow and be refined over time.

I also think this kind of balanced flexibility should be something we demand as a baseline assumption of the craft:

- We’re going to have instincts no matter what, and we should be encouraged to explore them

- Some of those instincts will prove right, some wrong; often “wrong” will be grey and murky and temporally separated from trying

- When we’ve recognized the error, we should allocate time to correct it with what we’ve learned

- As this process recurses, we should assess whether our predictive senses are getting closer to reality, and adjust our risk impulse accordingly


> In other words, the best way to fix the incompatibility problem caused by API mismatches is to never make the mistake of coding directly against the host platform's APIs to begin with. Trying to do it with compatibility shims to make one platform look like the other is a fool's errand. You end up running around trying to achieve parity with a mammoth API surface area (which might never have been especially well-designed to begin with...).

It seems like you're making a good point, but I'm confused by your comment. Do you believe we should avoid coding directly against the host platform's APIs, or do you believe we should avoid compatibility shims? At some point, you need compatibility shims in order to avoid coding directly against the host platform's APIs. The more your runtime strives to match existing runtimes, the less you need to worry about writing shims to improve compatibility.


I'm referring to compatibility shims like the ones described: where you have two platforms, so you try to make one look like the other and then target that. I'm saying don't do that.

> Do you believe we should avoid coding directly against the host platform's APIs, or do believe we should avoid compatibility shims?

Avoid compatibility shims that lead to you writing your application's logic directly against platform APIs.

You recognize that you need to read a file, so you write your application logic against the simplest possible interface you can think of that would let you do that: `read`.* You don't try to emulate either platform's API. This sounds like a bad deal, because neither platform provides native support for this interface, and the other approach is tempting, because you already get one implementation of the API for "free" (on whatever platform where that API is native), and you'd "only" have to write the compatibility shim for the other one (or re-use somebody else's shim). This is a mistake. For one thing, people end up misjudging the cost/benefits of each, and for another, the compatibility shim doesn't make the program easier to understand. You end up with quirks of that API infecting increasingly more parts of the program.

* What you don't do is go off and design the best, elaborate, pluggable, most extensible abstraction layer that you can think of. You implement `read`.


TypeScript is really helpful for this because you can specify what runtime(s) you expect your code to run in with the `lib` configuration. Looks like Deno provides its own versions of libraries: https://deno.land/manual/typescript/configuration
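
A minimal tsconfig.json sketch of that:

  {
    "compilerOptions": {
      // browser code: DOM types in scope, no Node globals
      "lib": ["dom", "es2020"]
      // server-only code would drop "dom" and pull in @types/node instead
    }
  }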


Isn't that just basic programmer knowledge? Basic as in "you need to learn the difference between a language and the environment it's in". I can run C# in Unity. The number of functions I can call is vastly different than running it outside Unity. I can run Lua in 100s of runtimes, all with different functions available. JavaScript can be used to script Adobe apps. The functions available are different than in the browser.

This is just normal. Is it confusing for a new programmer? Maybe. Is it something they should learn to distinguish? Yes!


> Isn't that just basic programmer knowledge? Basic as in "you need to learn the difference between a language and the environment it's in".

I don’t think the answer to this is boolean. For instance:

> I can run C# in Unity. The number of functions I can call is vastly different than running it outside Unity.

If one is learning both in tandem, coming from a background with a language with different scoping rules, knowing the distinction even exists and should be a consideration might not even show up in reading materials. Sure, it’s an important skill to learn, but it’s not especially transferable. I know how it works in some environments and still find it baffling in others where I even know how to read the code. (Example: Java treats methods as locally scoped functions; I still have no idea how to distinguish them without running the code in a debugger. But I don’t write Java! I have a limited understanding of what’s in scope reading it.)


The people that don't understand the difference between a language and a runtime always need to learn it at some point if they keep working as programmers, and context issues exist on all branches of human knowledge, even English itself has this quirk where some words mean the opposite depending on the context (these are known as contronyms).


Why is having a different API other than window.alert() a better alternative, though? That just makes it confusing in a different way; that you have to know to use different APIs depending on whether you're targeting the browser or server.


> This is nice, until its not.

What things aren't like this?


I'm simply incapable of not reacting with cynicism to someone so exuberantly rejoicing that you don't need a third-party library for taking user input. Am I living in a bubble, or what the hell went wrong here?


The input prompts are just one example, it's what it's an example of that's actually exciting.


One thing I have never understood about Deno is how you are supposed to run it on a multicore machine. Since they don't have a cluster module like in Node, there is no way to run one process for each core? If the main thread is busy running some CPU-intensive task, it will then block the entire application from answering other requests.

I get that in serverless environments like Deno Deploy each instance gets one core etc., but if you're running it on a VPS or just on a normal server, I would want something like Node's cluster module.


Efficiency seems to be a real priority with them. I like the simplicity of Deno Deploy too. You literally get a text box and just type in your cloud function. You don't have to install a CLI or set up tooling for a quick tryout.


Shameless plug. Disclaimer. I work for Cloudflare. Have you tried Workers? You can do the same in a playground [1].

You can also do the same by deploying a worker through the dashboard UI (including writing the code). Nothing to link though because you need an account. The playground is limited in what it can do because it’s not deployed. You can also use Pages to point at a repo which lets you build a website and server side code (through Pages Functions which actually runs your code in a worker).

[1] https://developers.cloudflare.com/workers/learning/playgroun...


Please purchase Deno the company before dismantling their only source of income through Deno Deploy; Deno the tool is too nice to lose!


To be fair, CF Workers is older than Deno by a few years. I don't see us as dismantling anything - do you blame the windmill or Don Quixote for tilting at them? This is a very competitive space too. Cloudflare is but one player, and I don't think the ones who do well will focus just on the serverless infrastructure piece. It's more about having a very complete product story that solves pain points better than anyone else for a price that's better than anyone else (i.e. build on a more solid foundation).

Since Workers started before Rust was a thing, Deno has a mild advantage in development velocity of the runtime (C++ is more annoying in some ways). They've also compounded that velocity by writing most of the runtime in Typescript, whereas our runtime is in C++.

However, our runtime is far more battle tested at scale which means trickier distributed systems problems solved and more efficiencies of scale (+ c++ has less of a memory hit than TS so it will be interesting to see if they can hit the memory scale needed to support many customers simultaneously). Personally I wish them the best as I think competition is always critical to force you to focus on making a better product.


> They’ve also compounded that velocity by writing most of the runtime in Typescript whereas our runtime is in C++

Can you explain more? Deno doesn't use typescript in the runtime. It is rust and pure js (for globals and wrapping bindings). I don't expect their current architecture to be any less efficient from what I know but it may have changed.


Yes you’re right. I assumed they’d use TS because they’re going through the effort of making it a first class citizen.

Doesn’t matter though because it’s the same (the only way to run TS is to transpile to JS)

https://github.com/denoland/deno/blob/main/runtime/js/README...

A good chunk of the runtime is in JS, snapshotted and loaded into each isolate. This is a greater memory overhead (and probably slower startup time) than having it written directly in Rust because the snapshot needs to be copied into each isolate. I assume Deno Deploy follows a similar model as Workers where you have many many isolates running concurrently within a process where this overhead may start to matter.

If Deno Deploy gets some scale, I’m curious how they’ll tackle. Less interesting is if they start inlining a good chunk of that functionality into Rust. It’s much more interesting if they figure out some way to improve v8 to reduce this cost. There’s some ways potentially if you could create a COW clonable isolate in v8 efficiently and it could be interesting to collaborate with them on it. But that’s hard. Maybe they have alternate plans.


Yes, big fan of Workers and CF. The only issue was that tying the domain to the worker was confusing, but that was a few years ago - last I tried it a couple months ago it seemed easier.


Yeah, I think the launch of our own registrar service simplified onboarding of custom domains (by default all workers are available on workers.dev domains).


Whenever someone has asked me "what would you change about Javascript" for the last 15 years, my answer has usually been "it badly needs a standard library."

The standards committees have partly advanced that and Deno is following those leads, but I've got my fingers crossed that Deno will also fill in gaps (and those will make their way back into standards committee considerations).

(My other answers have been more ... controversial, like: make [] false-y, add macros, and bring back `with`!)


Gonna guess you're a lisper ;)

Definitely its biggest warts are around core data types - mainly the implicit-casting rules - but those are also impossible to change at this point. "Don't break the web" and all that.


More like wannabe lisper. Maybe someday I'll land that Clojure job.

And yeah, making [] false-y would probably do bad compatibility-breaking things. That's from the department of wishes more than good practical going-forward choices.


A couple of questions:

  “Make [] false-y”
Why?

  “Add macros”
Why? How?

  “Bring back `with`”
Why?


What about not making [] any kind of boolean? Let's be explicit and use [].empty() or something, implicit conversions are confusing.


> implicit conversions are confusing.

Are they? All of them? 0:1 <≠> F:T? Boolean and binary are and should be distinct, and never the twain shall meet? A bold thesis in the world of computing.

Context-sensitive meaning is something people navigate all the time. You probably do it in some software contexts w/o even realizing it (and we all do it with human language and relationships w/o thinking consciously about it).

It's most likely to be confusing if you're unfamiliar with the context, or if the context-specific rules turn out to be convoluted.

0:1 maps on to F:T or even nothing:something or empty:non-empty pretty effortlessly. Even rigorously, with the right level of attention.

Implicit can mean subtle. Whether that means confusing (or whether there's some other cognitive load imposed by some forms of subtlety) is a separate question. And probably subjective.

Explicit can be a virtue. Concision can also be a virtue, and sometimes context-sensitivity promotes that.


Not GP, but I really like [] being false in ruby and python because I often want to ask "is this variable that should hold a collection holding a collection of things, or is it empty/false?"


Why should an array with zero elements be false? It doesn't make a lot of sense. And writing if (array.length > 0) is more explicit than writing if (array) anyway. Even in Python, I prefer to be explicit and say if len(array) > 0 than to rely on implicit conversion rules.


It would be better if Javascript had Array.empty() instead of comparing array length. It would make things even more explicit and more convenient.


Personally, I don't even like if accepting things that are not boolean. But an empty array being false makes exactly as much sense as an empty string being false, and both should always have the same behavior.

If you want objects to behave like dictionaries, an empty object should also behave the same, by the way.


I'd argue that `if empty_list` is already explicit in Python.


[] is truthy in Ruby but there is #empty? on collection types. Rails also has #blank? which additionally works on false and nil.


[] is truthy in Ruby. Only false and nil are falsy.


Yeah, sadly zero is not falsy in Ruby.


I always found it strange that in Python, stuff doesn't get implicitly cast to string (even Java does that!), but for whatever reason the idiomatic way to check if a collection is empty is to do "if collection:", implicitly casting to bool.


Nothing gets cast to a bool in Python if statements.


Luckily, most languages have a `.empty()` method! You should use that instead for readability purposes.


And in that vein, JS has some().


You are thinking about `[].present?` I think


[] should be falsey so it can be used in an if().

I'm not sure about macros.

I kind of want to see `with` back also. The problem with it was ambiguity. The syntax could be adjusted to avoid the ambiguity, e.g. `.prop = val;` could be legal inside the block. But "why though" you ask. A `with` block makes it visually obvious that a block of code is specifically relating to getters/setters on a particular object instance.


I'd argue truthiness implicated by the presence of an object is more logical than having an arbitrary definition of truthiness depending on the data type.

If I have a cup that's empty, there's still a cup there. It's presumptuous to assume I care about the contents.


There's a presumption either way. You can make the presumption JS did for [] (and there's probably no turning back), you can even argue explicit is better than implicit (vs the value of context-sensitive concision).

But it's also not consistent with the ways that other empty but typed cups are treated ("" for the empty string, 0 for empty count / quantity, both of which are false-y). Different presumptions for different types of cups carries its own implicit hazards from context sensitivity.


In that case "" should be truthy. And 0 should be truthy too probably.


They should be, to have consistency in design. Note that if the language operated this way, checks on these types would be more explicit (though more verbose too).

Checking for presence of a number with a truthy check is a common source of errors, if 0 is a valid number.

I just don't agree at all that Python's interpretation of truthiness (or equality, for that matter) is inherently the "right" one. If I clone somebody are they equal because they have the same genetic makeup? No. Though a pretty contrived example that requires common understanding of equality.

At least from a programming perspective, I usually care more about instance equality rather than structural equality. Of course Python supports that too via the `is` keyword


Yes and yes. Ruby got it right, no question :)


Macros with a standard library would have saved us from many compatibility and bloat issues, had they arrived before things like the class keyword.


Macros don't have that utility in a language with higher-order functions.


A standard library, a proper date/time/timezone implementation, and robust number support (all sizes of integers and for the love of god Decimal). Every time I get excited about Deno and Typescript and all of that, I get so frustrated that 0.1 + 0.2 != 0.3 and that I need to find and vet a community Decimal library.

(I know BigInt is a thing and Temporal is coming.)


I agree a standard library for JS would be great, but .1 + .2 != .3 is the case in several languages and has nothing to do with JS specifically.

This also is or can be the case in C, C++, Clojure, Python, and many other languages [1].

[1] https://0.30000000000000004.com
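
The classic demonstration, straight from any JS console:

  console.log(0.1 + 0.2);          // 0.30000000000000004
  console.log(0.1 + 0.2 === 0.3);  // false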


Yes... It's an artifact of floating point. My issue is that JS only has floating-point numbers, not ints and decimals.


There is a stage 1 proposal for BigDecimal.

https://github.com/tc39/proposal-decimal


JS has had integers since pretty much day 1 with bitwise operators. They just (from a user perspective) convert back afterward. If you were choosing just one though, floats would definitely be a better choice than integers.

It's also possible to make length 1 typed arrays of integers and operate on them if you really need. Then, as you mention there's BigInt numbers which are supported on the last few versions of every major browser except IE11.


Deno is a joy to use server-side. Though as soon as it's used in conjunction with even something simple like minified code, you're forced back to node/npm.


If you want to bundle and minify JavaScript code, you can use esbuild:

  import * as esbuild from "https://deno.land/x/esbuild@v0.14.25/mod.js";
If you want to bundle and minify SCSS, you can use dart-sass:

  import { useDartSass } from "https://raw.githubusercontent.com/MarkTiedemann/deno-dart-sass/0.1.0/mod.ts";
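
For the esbuild route, a production build script might look like this (sketch; API as of esbuild 0.14):

  await esbuild.build({
    entryPoints: ["src/app.js"],
    bundle: true,
    minify: true,
    outfile: "assets/app.min.js",
  });
  esbuild.stop(); // shut down the esbuild service so the process can exit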


How are you using Deno with minified code? Do you mean importing a minified script like https://cdn.jsdelivr.net/npm/react/cjs/react.production.min.... ?


I meant bundling your code for production.



Why would you need to do that for code running on Deno? It doesn't need to be sent over the wire.


Obviously not for server code. Example: serve 'src/app.js' uncompressed during development, and 'assets/app.min.js' during production.


Just wish it had some more host-level access to get to web cameras/usb devices and serial ports.


Yess! Take a look at https://deno.land/x/webusb


I'm sorry, but all those examples and Deno's documentation in general should be in JavaScript. I respect the devs' right to choose whichever language they want, but Deno seems to want to be the standard bearer for the power of scripting and web standards [1]. If that's the case, then they are causing more harm than good by focusing exclusively on TypeScript, which isn't a language as much as a set of macros on top of JS. TS should remain a niche for those projects that can benefit from type safety, not promoted as some sort of better alternative to JavaScript, because it's really not.

1. https://deno.com/blog/v1


> not promoted as some sort of better alternative to JavaScript, because it's really not.

This leaves me skeptical if you have extensive experience with Typescript. It's way more than "oh this is a number not a string." It's "you forgot this property on an object's return type that you built from a response value" or "your Redux reducer doesn't handle all of the possible action types so it will crash at run time"


> Deno attempts to provide a standalone tool for quickly scripting complex functionality.

That's from Deno's home page. What I'm talking about isn't whether TypeScript is useful for some projects, as that has been proven beyond question. What I'm saying is that as a tool for "quickly scripting", as claimed by Deno, it is unnecessary and counterproductive to JavaScript as a whole.

My point is that the Deno project is a high profile JavaScript engine which, in my opinion, is misguided in their end-user focus on TypeScript and I think it's a shame.

Devs are always searching for the "right" way to do things, and can be easily convinced to do things like misuse "const" because some pedantic fool convinced them it was sorta like type safety, and therefore not using it was "baaaaaaad". And the sheep followed and now const is used everywhere, despite the clear and unambiguous intended functionality of the feature's designers, who added it as a way of tracking, incredibly, constants and nothing else.

It would be nice if we saved a generation of devs another debacle like const, NoSQL databases and UML.


How are NoSQL databases "bad" in general? The term "NoSQL" in itself is very inclusive and might include databases such as Redis, Elasticsearch, or Apache Cassandra.

It's important to choose the right tool for the job, and SQL/relational DBs fit many use cases, but they are not always the right choice. Especially when handling massive amounts of data that fit into structures such as wide-column stores, databases like Scylla or columnar storage like ClickHouse can help massively.


NoSQL is a good solution for the 1% of projects that need it. It sounds like you had great success with it, congrats.

Otherwise, it just ends up being a big flat in-memory table with a badly implemented version of SQL welded on top, written by developers who never bothered to learn SQL or database management in the first place. Then a bunch of fad followers jump on the bandwagon, and before you know it, we have a decade of idiots blathering on about "web scale" technologies.

That's the debacle I'm talking about. Millions of man hours wasted.


There is zero practical downside to using `const` whenever possible, and my (Fortune 100) team's backend has been using a NoSQL database for years which has saved us an absolute ton in hosting costs. Hardly a "debacle."

Our solution is the right way to do things... for ourselves. Sorry that it upsets you.


"your Redux reducer doesn't handle all of the possible action types so it will crash at run time"

And what are all these Dr. Strange multiverse possibilities you speak of? Any semi-seasoned JS dev has a spidey sense for watching out for null and undefined, and generally you should know the type of what you are returning. Is there that much variability in what you are dealing with? If it's an array of objects, that's not hard to hold in your brain; just watch out for empty arrays or null/undefined items.


> Is there that much variability in what you are dealing with?

Yes.

I don't mean any disrespect but again it sounds like you really haven't tried Typescript.

Spidey sense can and often will fail.


> Any semi-seasoned JS dev has a spidey sense for watching out for null and undefined

That comment reeks of the same biases that C/C++ developers make when they admit that a majority of C/C++ bugs are related to hard-to-detect pointer/memory safety issues, but claim that they're a good C/C++ developer because their own code doesn't have such bugs, despite the bugs by their very nature being hard-to-detect.


I am not a web dev at all but having something for automatic verification (even if limited) sounds more reliable than the variability of any given dev's "spidey sense".


It’s essentially a subjective question, but FWIW I disagree. I’ve written quite a bit of JS and quite a bit of TS, and I find TS so much better. I’m more productive, generate fewer bugs, and have more fun when writing it.


This reads like you’ve never written a serious app in either JavaScript or Typescript. Willingly using JavaScript over Typescript in 2022 is insanity. There’s a reason Typescript won out over Dart and Flow - and why Javascript preprocessor languages and type checkers like Typescript even exist. It all boils down to one incontrovertible truth: JavaScript really really sucks.


Some part of me wonders if they wrote some clear optimizations for defined types that translate easily to Rust. I could give two shits about types honestly, but I’m also one of those weirdos that keeps function arity low, and object definitions concise.

How did people write semi elegant Ruby or python all these years, why wasn’t there such a massive push for types in those languages? Most likely because backend people chose the backend language of their choice, but on the frontend they detested JavaScript (you have no choice, you must JavaScript you anti authoritarian shit heads) so much they had to drown it with some kind of ketchup to make it edible (Typescript).


After using typed backend languages like Rust and even Go (which is painful in its own right, in my perspective) I would hate to work again on a big Ruby or Python codebase. They feel just as disgusting, to use your metaphor, as using plain Javascript instead of Typescript.

> why wasn’t there such a massive push for types in those languages

Are you just choosing to ignore all of the history of type checking Python and Ruby?


Look, this is going to go back and forth. Take a time machine to pre-Web 2.0 and explain to everyone why OOP programming sucks. I'd dare you to take it off your resume. But we're here now, right?

It’s not hard for me to imagine the reversal of this trend inevitably where everyone goes ‘the fuck are we writing all these verbose types for this dumb web app for?’.


I really don't see what the back and forth is. Types are not inherently a fad - but there can be people who promote them with fanaticism as a cure-all or for problems they can't solve. There were people skeptical and critical of OOP when it was popular. I don't believe that OOP is inherently a fad either - it has its place. Different paradigms just get caught in the windstorm of fad interest.

> [why] are we writing all these verbose types for this dumb web app for?

You can take anything to the extreme, but types are a zero risk, low effort investment that has a quick return. If someone over-types something it's hardly a problem compared to an over engineered OOP codebase.


The problem with OOP is that there is not a single definition.

There are a bunch of people claiming this and that are OOP. To me OOP is encapsulating a mutable state inside a dynamic namespace (an object, an instance of class), with functions that can access the state and the ability to inherit / extend namespaces.

And I absolutely don't need it, I don't agree with the view some things are better done with OOP. Even gaming or GUI programming, domains typically considered to be the best for OOP, turned to ECS (which is very functional) and Elm style APIs.

Going back to OOP: I consider mutable state to be a necessary evil to be limited as much as possible; inheritance makes it hard to track what code is being run.

The best practice for writing OOP revolves around limiting mutable state and inheritance, so why even bother with OOP in the first place?

I can have encapsulation with namespaces / modules in functional languages as well. I don't need much else and I can live happily without `this` and using composition instead of inheritance.

OOP was the first marketing wave focused on developers, and it's gone.


I guess that's my main deal breaker. It's simply another foot gun to allow people to over-abstract. I have no issues with modest typing; it's nice, it's clear. I have issues with what entropy inevitably does.


So why are we in disagreement then? I am not advocating fanaticism, you simply seem to have implied and inferred that.


Sometimes it takes a discussion, I’m not in the business of upvoting or downvoting :p


You're a bit late to the party...Web 2.0 was this phase.


Deno can handle plain JavaScript just fine. In what way is targeting TypeScript harmful? How do you differentiate between a project that can benefit from type safety and one that cannot? Why should Deno's documentation be in JavaScript?

What does it mean to "respect the devs' right to choose whichever language they want"?


The idea of TS is not bad, but they should have forbidden `any` and other ways to escape the type system. I get tons of runtime errors in my TS projects as well because someone down the chain decided they couldn't be bothered with types.

Also, having to do type definitions for modules which don't have types is a massive pain. Overall, I appreciate the type safety, but I don't think TS delivers on that promise and I'd rather use a real typed language.


> You Might Not Need NPM

If it's like every other JavaScript project I've been on since 2016, it's going to actually require thousands of JavaScript packages and doing everything with it is going to be slow as molasses even on high-end developer workstations.


Deno has a standard library that wants to be 'batteries included' so you don't have nonsense like a third party package for left padding a string (deno supports string padStart natively for this for example).
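
For example:

  "5".padStart(3, "0"); // "005"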


Sorry, but padStart/padEnd are part of ECMAScript 2019 (or near that year). Deno did not implement them. (AFAIK)


Both are implemented in deno per MDN’s compatibility table.


Deno does not have its own javascript implementation, it uses the V8 javascript engine, and V8 happens to implement padStart and padEnd.

V8 was originally created as the javascript engine for Chrome, but it is used in many products now, including Node.js and Deno: https://v8.dev/


That’s true for padding. Deno is still doing a lot to standardize on the web platform APIs to be part of the base runtime available and that’s work they’re doing on their own and not for free as part of v8. This is similar to what we do at Cloudflare and I think there’s similar efforts within Node in 17 for what it’s worth.


Everything supports padStart natively, for almost half a decade by now.


I'm referring to the left-pad package on npm that infamously ignited a huge controversy years ago and started some of the cracks in the entire ecosystem that are growing larger and larger today. For more context: https://qz.com/646467/how-one-programmer-broke-the-internet-...


Point is, Deno is doing a lot to bring feature parity with expected Web APIs (which are not part of JavaScript the language yet), and they are building a single export standard library, but padStart is not part of their (direct) actions.


One benefit of deno on this front is that it _only_ pulls down the js files you actually need. With npm, even if you only import one file from a package, every single other file will also be downloaded to your computer. And probably some test files, a bunch of package-lock files, maybe some branding images. Oh and the same for all the packages it references, too. The storage/download time savings of getting just the js add up a _lot_ more than I expected. I've toyed with deno on a few projects, and my entire deno cache is smaller than any single node_modules directory.


Deno is also creating a standard library, so that too should reduce the amount of external packages you'll need.


The Deno standard library is not bundled, but fetched separately as a dependency for each imported file.


That's a pedantic argument, the deno standard library doesn't depend on any other library except itself and deno's core. You can know if you depend on it that you don't have dozens or hundreds of other dependencies, unlike pulling in a random npm package.


This is the real power of Deno!


Coming from PHP to Node.js back in the day, it felt like quite an improvement.

In PHP reinventing the wheel was one of the biggest issues.


I think the problem was/is a generation of inexperienced developers who were brought up on importing packages for padding a number. The thing that always struck me was how easy and free it was to publish.

Kinda fun to imagine a packaging system where you have to pay a very small crypto fee to publish. I'd bet fewer packages but more useful stuff on the whole.


Deno is the best thing to happen in web middleware, potentially over its entire history. It's like someone solved those problems over many years, took a break, then started from scratch to do it properly.


I do wonder about a future where JavaScript gets non-enforced optional type annotation syntax[1], and whether said syntax will be slightly incompatible with TypeScript. That would be a little awkward for Deno, wouldn't it? However, I hope that if JS gets type annotation syntax, it will be a strict subset of TypeScript - or at the very least future-compatible - for this very reason.

1. https://github.com/giltayar/proposal-types-as-comments


Whatever the final syntax will be, if it is part of the JS standard, Typescript will probably adopt it at some point.

(And its team will try to push the standard towards being as close to the current Typescript as possible)


TypeScript `tsc` can already type check JSDoc type annotations, which is very useful: https://www.typescriptlang.org/docs/handbook/jsdoc-supported...
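
For instance, with `checkJs` enabled, `tsc` flags the bad call in this plain-JS file (small sketch):

  /** @param {number} x */
  function pad(x) {
    return String(x).padStart(2, "0");
  }
  pad("oops"); // tsc error: 'string' is not assignable to 'number'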

Edit: I should perhaps listen before I speak -- the proposal linked above talks about JSDoc and why it does not fulfill their needs, which may be the case. I have not run in to these limitations myself.


I agree on the premises, it's nice to have an open standard - in practice the API available on the web is usually a downgrade in developer experience.

Think about the old node library request vs fetch, require vs import/import(). I hope deno has Buffers and I won't have to use atob / btoa.


IIRC Buffer in Node has been a wrapper around Uint8Array for a long while, which is in the box in both, and the browser.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
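
For the common string-to-bytes chores, the standards-based equivalents look roughly like:

  const bytes = new TextEncoder().encode("hello"); // string -> Uint8Array
  const text = new TextDecoder().decode(bytes);    // Uint8Array -> string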


I love the idea of deno over Node. Is it getting traction though?


I've started using it for more shell scripting... since it's easier to run without worrying about `npm install` for dependencies.

Also using it where I'm not tied to Node by a specific dependency. There's some shimming surface towards getting Deno able to run node packages and vice-versa as well as ES-Module packaging of node repositories for use directly in the browser or deno, to more or less success.


Not sure about Deno, but alternative runtimes for JavaScript on the server are popping up in a lot of places now. All of the edge hosts are using the V8 JavaScript engine with their own wrapper.


I hope deno succeeds in replacing nodejs & npm. I don't want to deal with NPM and all the shenanigans that comes with it.


Deno is excellent. I really regret not using it for my most recent Node project.


Can Deno now be used with MS SQL?


Not one that's up to date and maintained, as far as I can google.

But there's a proper sqlite driver now using the new ffi api support in deno. This didn't exist last time I tried googling db support in deno.

https://github.com/denodrivers/sqlite3


And Zendaya is Meechee.


Is there a solid coffeescript type thing for deno?


Deno is just a javascript runtime and standard library, so in theory you can already use coffeescript.


Deno is webby... But who cares?

It's solving a problem no one has.

Standard library improvements are always welcome, though.



