> The Bay Area continues to lose jobs across high-income sectors (-0.4% YOY), driving modest overall employment declines. These job losses have slowed compared to a year ago but remain negative YOY. Despite generating substantial spending and wealth, the AI-driven tech boom hasn’t added meaningful employment to the region.
That is because there has been an absolute massacre in biotech in the Bay Area. Between tariffs (higher COGS), chaos at the FDA, cuts to NIH funding for basic and translational research, and competition from China, biotech ventures are getting squeezed from both ends.
IMO the problem with Firefox is that custom search engines can't use POST requests, even though the underlying search-plugin machinery supports it. You may want to check out the Mycroft Project [1] for that.
Kagi uses Yandex to improve search results for relevant queries. That's all they do.
As a company providing the service of web search, Kagi should do whatever it takes to improve search results. I imagine Yandex is the biggest and most complete index of Russian-language content - not using it would make the search results worse. The fact that Kagi still cross-references other indexes and allows users to downgrade specific results provides a check on propaganda content.
It's OK to have an opinion, and it's OK to dislike Kagi because it doesn't have the same opinion. It's wrong to mischaracterize what Kagi does, using wording that strongly suggests actions way more nefarious than giving a few dollars to a Russian company in exchange for some (anonymized) API calls.
There is nothing about collaborating with the Russian government in the post. They merely use a Russian-based search engine to provide better search results. I won't argue whether this is good or bad, but your statement makes it sound like they collect all users' data and sell it directly to the Federal Security Service.
That's as shallow as saying "<anything Russian> is the Russian government". The Russian government has certainly established strong leverage over Yandex (and every other major business, for that matter), but they don't exactly own the company.
Such blanket statements really don't bring anything to the table.
PS: I think you might be confusing Yandex with VK. VK is known to be loyal to the government and to provide users' data to law enforcement on a whim, without proper procedures.
Ok, so they say Yandex queries are 2% of their costs. Kagi currently has 57,341 paying members. Even if you assume that every single user has a $25/month Ultimate membership and that 100% of that money goes toward the search API (neither of which is the case), you get that Kagi pays about $29,000/month to Yandex. According to the World Bank, the corporate tax burden in Russia is about 46% of profit, so if you count all of this money as profit, Kagi pays $13,000/month to the Russian government.

In reality, Kagi spends MUCH less than $25/user on search API costs, and Yandex doesn't book all of that money as profit, so the real figure is closer to $1,000-5,000, maybe even less. And most of that money goes to local governments or pension funds anyway.

So yeah, Kagi may pay about 1-2 cents a month from each user's subscription to the Russian government in exchange for massively improving the quality of their service. That's not nearly enough to call it "proudly collaborating with the Russian government", and I can guarantee you that MUCH more of your money goes to Russia every month in other ways. In fact, if you live in Europe, you probably pay more to Russia through your own taxes.
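The back-of-the-envelope math above can be sketched as follows, using the deliberately generous upper-bound assumptions from the comment:

```typescript
// Upper-bound estimate: every one of Kagi's 57,341 paying users on a
// $25/month Ultimate plan, 100% of revenue spent on search APIs, with
// Yandex at 2% of costs and a 46% effective tax take on all of it.
const payingUsers = 57_341;
const pricePerMonth = 25;        // USD, Ultimate plan
const yandexShareOfCosts = 0.02; // "Yandex queries are 2% of their costs"
const taxBurden = 0.46;          // World Bank figure cited above

const toYandex = payingUsers * pricePerMonth * yandexShareOfCosts;
const toGovernment = toYandex * taxBurden;

console.log(Math.round(toYandex));     // ≈ $29k/month upper bound
console.log(Math.round(toGovernment)); // ≈ $13k/month upper bound
```

Even under these worst-case assumptions, the per-user amount reaching the Russian government is on the order of cents per month.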
> Our role is shifting from writing implementation details to defining and verifying behavior.
I could argue that our main job was always that - defining and verifying behavior. As in, it was a large part of the job. Time spent on writing implementation details has always been on a downward trend, via higher-level languages, compilers, and other abstractions.
Not trying to minimize the efforts of people who do this as a real job, or influencers - you do you. But generating fake message screenshots, sending unsolicited messages, etc.? And the winner is whoever gets the biggest rise out of the consumer, authentic or not.
Distribution is hard, I get it. But isn't this the equivalent of everyone just rocking up to the village square in the most outrageous costumes and screaming into the megaphone?
The alternative is to build the audience first, then sell things to it. If you don't command an audience, you must use someone else's, usually at a cost.
You don't have to be the loudest and most outrageous if your product is great and you speak the right message to the right audience.
I think this + node:test makes Node.js a pretty compelling sensible default for most things now. Running things with `tsx` was such a QoL improvement when it happened, but it didn't solve everything.
Runtime type assertion at the edges is mostly solved through `zod`, and tools like `ts-rest` and `trpc` make it so much easier to do full-stack TypeScript these days.
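For readers unfamiliar with the pattern, here's a minimal hand-rolled sketch of what `zod` automates: validating unknown input at the edge, then getting a narrowed static type. The `User` shape here is hypothetical.

```typescript
// Hypothetical payload shape used for illustration.
interface User {
  id: number;
  name: string;
}

// A type guard: checks the value at runtime and narrows the static type.
function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "number" && typeof v.name === "string";
}

const payload: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
if (isUser(payload)) {
  // Inside this branch, payload is statically a User.
  console.log(payload.name);
}
```

Libraries like `zod` generate both the runtime check and the static type from one schema, so they don't drift apart the way hand-written guards can.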
This. It's 2025 and the node ecosystem is finally usable by default!
ESM modules just work with both Node and TypeScript, Node can run .ts files, and there's a good-enough test runner built in, along with `--watch`. The better built-in packages - `node:fs/promises` - are nice, with top-level await for easier async loops.
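A minimal sketch of those last two points together - the promise-based built-in fs module plus top-level await, which works out of the box in ESM modules (`"type": "module"`) on current Node. The filename is arbitrary.

```typescript
// node:fs/promises returns promises directly; with top-level await
// there's no need for a main() wrapper or .then() chains.
import { writeFile, readFile, rm } from "node:fs/promises";

await writeFile("demo.txt", "hello from node:fs/promises");
const text = await readFile("demo.txt", "utf8");
await rm("demo.txt"); // clean up the temp file

console.log(text);
```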
It took a while to convince everyone involved to just be pragmatic, but it's nice now.
This is great to hear, but perhaps comes too late for people like myself. Node.js had been my go-to platform from around 2014 until last year. But around September last year, I found myself thrust into the .NET ecosystem (due to a client project). Within a few months, I realized that it, too, had finally become usable by default (unlike the last time I tried it, when it was too tightly coupled to Windows). In fact, it felt like what Node.js would be if it had strong typing built in and a good standard library that eliminated a lot of the module management and churn. I'm now finding it hard to return to Node.js.
I can second this experience. I arrived roughly 10 years ago, right in time to see .NET Core 1.0 emerge, and I've been on board ever since. You should absolutely check it out. The compilation story (Native AOT) is what I'm currently most excited about.
F# is a great FP language that runs on .NET and there's a growing field of FP proponents working in C#, sort of a Trojan Horse situation trying for a best of both worlds (easy onboarding for C# junior devs, but deep FP options thanks to things like C#'s clever LINQ syntax). LanguageExt is a big part of some of those ecosystems: https://github.com/louthy/language-ext
What's the story with supporting CommonJS libraries? I've tried to update many projects to ESM multiple times over the years, and every time, I ended up backing out because it turned out that there was some important upstream library that was still CommonJS - or even if we fixed those issues, our downstream NPM consumers wouldn't be able to consume ESM. So then you have to go down this rabbit hole of dual compilation, which actually means using something other than tsc.
With "type": "module" there are very few reasons to do dual compilation, unless you have very conservative downstreams that hate promises and async/await - and even then there are mitigations now (sync `require()` of async ESM).
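For reference, the switch in question is a single field; a minimal package.json sketch (package name and path are hypothetical):

```json
{
  "name": "my-lib",
  "version": "1.0.0",
  "type": "module",
  "exports": "./dist/index.js"
}
```

With this in place, `.js` files in the package are treated as ESM, and recent Node versions let CommonJS consumers load the package via `require()` as long as it doesn't use top-level await.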
It's been a while since I've had trouble importing an upstream CommonJS library. It's often easy enough, and in the rare cases where upstream is particularly old/gnarly, you can vendor an ESM build with rollup or esbuild.
That said, CommonJS is also now a strong maintenance signal for me: if a library doesn't have recent ESM builds, I start to wonder whether it's well maintained at all, and not just a pile of tech debt to avoid. JSR has started becoming my first place to search for a library I need, ahead of NPM, because of JSR's focus (and scoring system) on ESM-first packages and good TypeScript types.
I can’t help but think that none of these would have happened without Deno doing it first. It was basically the pragmatic Node before Node started to get reasonable.
Watching Node.js fill in these gaps over the last 5 years or so has been great. I strongly prefer using built-in stuff as much as possible now, to avoid bloating node_modules and becoming dependent on a thousand random people being good stewards of their packages.
Well, only goes to show how different everyone's experiences are. I guess I've had the opposite one: Node+CommonJS was something I was extremely comfortable with.
The slow adoption of ESM by Node, with many compatibility missteps; the thousand papercuts around TS; the way frontend-centric toolchains kinda-sorta paper over the whole thing, letting it fester; and the way people have been acting like things are ready for primetime for over a decade while diligently testing them in production - all of that came later. To the point of having me wonder how people worked with TypeScript before ~5.4 - though evidently they did, and had few if any of the same complaints!
Baffling, but IIWII. Anyway, only this year did I discover that a pure `tsx` + ESM workflow had become viable OOTB, to my considerable surprise. I perceive that as the toolchain becoming unfucked just as randomly as it became fucked when Node 16 did what it did. Not that it didn't also take TS a couple of years to "invent" the right compiler flags to tell it to stay out of the runtime's way.
So a good year overall. Hope they don't break it again because when they do it's an uphill struggle to convince them that they have.
You're being downmodded for not providing any supporting arguments, but there are some compelling protections against malicious modules in these other JS implementations.
That's... weird. And kind of hypocritical, given the quality of your own comment which (a) mentioned downvotes and (b) used a few more words that boil down to "module protection". At this point I'm not exactly elevating the conversation either, for which I apologize. But I do think brief comments like mine and the one I replied to are perfectly fine.
I moved on to Biome (which replaces both ESLint and Prettier) and while the IDE extensions have been a bit buggy, it's much faster and has fewer dependencies. It was always a pain to set up ESLint + Prettier.
ESLint these days doesn't have any styling related lints (unless you opt into them) which means that it works out-of-the-box with Prettier (or Biome's formatter, presumably).
My fear with Biome is missing out on type-aware lints, but I know Oxlint has had some success integrating the new Go-based TypeScript compiler, so maybe that will work out for Biome as well.
"Good" according to someone, somewhere, telling everyone else what good is.
Arguably, code formatters should be configurable, so you can get the format you actually want for your code. Unfortunately, prettier isn't, and its adoption is a form of regression in many communities, at the cost of pruning choice.
It might be great for a CI pipeline for constraining how code should look (use prettier, dumbass!), but it isn't great for actually formatting code, as it just makes the code "prettier".
Using it as a precommit hook in OSS projects makes it so that people can write code however they want, but it ends up in the repo following the repo's guidelines, minimizing unnecessary back-and-forth on PRs. Extremely useful in my opinion.
Prettier has defaults, but they can be modified to quite some extent to suit your project's needs: https://prettier.io/docs/options
> Using it as a precommit hook in OSS projects makes it so that people can write code however they want.
That is the point of a formatter, so any formatter would do that (and there were many more active projects to allow formatting before prettier came around).
> quite some extent
Not really, and I have written prettier plugins to get around that constraint.
IMO, it's not great, which is kind of how things work out when you try to do everything in one project.
> That is the point of a formatter, so any formatter would do that (and there were many more active projects to allow formatting before prettier came around).
No arguments here. You are free to choose the formatter you want.
> Not really, and I have written prettier plugins to get around that constraint.
Or you could simply use those better formatters you were talking about.
Yes, with the difference that Google would have to be compromised in order to poison the Go distributable containing the fmt tool. With JS, it's enough to poison any single one of the 1,400 dependencies of the linter.
I forgot that even though fmt will never suffer from man-in-the-middle attacks when downloading the Go toolchain, the standard library already covers 100% of the use cases someone cares about using Go for, and no one is using CGO.
I used to use CGO quite a lot in embedded Linux environments.
And we had huge dependency chains to non-standard-library stuff as well - nowhere near as bad as an average Node.js project, but still not free of the problem.
I'm very much in favor of TS support directly in node. vitest has made it easier these days, but I've lost too much time over the years getting the balance just right when configuring test environments for .ts files.
trpc and ts-rest are a different animal in my opinion. I'm happy to use either one for prototyping, but won't deal with them in production. For trpc, that's mainly due to not owning the API URLs and not being able to clearly manage deprecating old URLs gracefully.
For ts-rest, I just tend to prefer owning that setup myself, usually with zod and shared typings for API request/response pairs. It also irks me every time I import what is clearly an RPC tool named "-rest".
I switched to Python a while ago. It has batteries included. I feel so much better now that I don't have to debug all the quirks of a half-baked system.
I work with Node every day, and the library ecosystem is a nightmare. Just keeping a project from falling apart takes a huge amount of effort. Libraries are either abandoned when the author moves on, or they push major releases almost every month. And there’s a new CVE practically every week.
Python libraries are much more stable and reliable.
> Just keeping a project from falling apart takes a huge amount of effort
I think the culture of importing libraries with lots of dependencies is a big contributor.
> Libraries are either abandoned when the author moves on
This applies to any OSS project. Generally speaking popular abandoned libraries get forked.
> or they push major releases almost every month
This sounds like a very bad library to use. I would not recommend having this type of library as a dependency in Node or even in Python for that matter.
> Python libraries are much more stable and reliable.
Not sure what would make Python libraries magically more stable and reliable. Maybe libraries with minimal dependencies could be the reason. That is why I recommend zero- or minimal-dependency libraries for Node.
I work with both node and python. I agree with you on node, it is a dependency disaster. But regarding python the problem is not with the libraries themselves but in the circus of pip vs conda vs poetry vs pipenv vs uv vs ...
It doesn’t. The comment you’re replying to is referring to tsx, the package that lets you execute ts files, not to running files with the tsx extension.
I've had a notion that LLMs can read TypeScript types much better than JSON Schema types.
So, I've been tinkering with a library that can generate schemas for structured JSON outputs according to a TypeScript-like custom schema definition: https://github.com/nadeesha/structlm
So far, I've been seeing promising results: accuracy on par or better, while using 20-40% fewer tokens than JSON schemas.
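To illustrate where the token savings come from (the shapes below are hypothetical examples, not the structlm API): the same structure expressed as a TypeScript-style type is much shorter than the equivalent JSON Schema, and character count is a rough proxy for token count.

```typescript
// The same object shape in both notations.
const tsStyle = `{ name: string; age: number; tags: string[] }`;

const jsonSchema = JSON.stringify({
  type: "object",
  properties: {
    name: { type: "string" },
    age: { type: "number" },
    tags: { type: "array", items: { type: "string" } },
  },
  required: ["name", "age", "tags"],
});

// The TS-style form is a fraction of the length of the JSON Schema.
console.log(tsStyle.length, jsonSchema.length);
```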
A heat shield leaks just enough heat that the people inside know the heat is there, but provides enough cover that the team is somewhat shielded.