
Here's my `npm` command these days. It reduces the attack surface drastically.

  alias npm='docker run --rm -it -v ${PWD}:${PWD} --net=host --workdir=${PWD} node:25-bookworm-slim npm'

  - No access to my env vars
  - No access to anything outside my current directory (usually a JS project).
  - No access to my .bashrc or other files.
Ref: https://ashishb.net/programming/run-tools-inside-docker/


That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.
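For illustration, a minimal sketch of that workflow, assuming a recent pnpm (v10+) where skipping dependency scripts is the default:

  pnpm install          # dependencies' lifecycle scripts are skipped
  pnpm approve-builds   # interactively whitelist packages allowed to run them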


At work, we're currently looking into firejail and bubblewrap a lot, and within the ops team we're looking at ways to run as much as possible, if not everything, through these tools.

Because the counter-question could be: why would anything but ssh or ansible need access to my SSH keys? Why would anything but Firefox need access to the local Firefox profiles? All of those can be mapped out with mount namespaces from the execution environment of most applications.

And sure, this is a blacklist approach, and a whitelist approach would be even stronger, but the blacklist approach of securing at least the keys to the kingdom is quicker to get off the ground.
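For a flavor of what that looks like, here's a minimal bubblewrap sketch of the blacklist idea; the flags are real bwrap options, but the layout is illustrative and you'd adapt it to your setup:

  # Start from a read-only view of /, hide $HOME behind a tmpfs,
  # then bind back only the project directory.
  bwrap --ro-bind / / \
        --dev /dev --proc /proc \
        --tmpfs "$HOME" \
        --bind "$PWD" "$PWD" \
        --chdir "$PWD" \
        --unshare-pid \
        npm install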


firejail, bubblewrap, direct chroot, sandbox-run ... all have been mentioned in this thread.

There's a gazillion tools out there, enough to give someone analysis paralysis. Here's my simple suggestion: all of your backend team already knows (or should learn) Docker for production deployments.

So, why not rely on the same? It might not be the most efficient, but then dev machines are mostly underutilized anyway.


> Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.

Imagine you are in a 50-person team that maintains 10 JavaScript projects, which one is easier?

  - Switch all projects to `pnpm`? That means switching CI and deployment processes as well
  - Change the way *you* run `npm` on your machine and let your colleagues know to do the same
I find the second to be a lot easier.


I don't get your argument here. 10 isn't a huge number in my book, but of course I don't know what else that entails. I would opt for a secure process change over a soft local workflow restriction that may or may not be followed by all individuals. And I would definitely protect my CI system in the same way as local machines. Depending on the nature of your CI, these machines can have broad access rights. This really depends on how you do CI and how lax its security is.


I'll do soft local workflow restriction right away.

The secure process change might take anywhere from a day to months.


There are a great many extra perks to switching to pnpm though. We switched our projects over a while back and haven't looked back.


Yeah, I'd just take the time to convert the 10 projects rather than try to get 50 people to change their working habits, plus new staff coming in etc.

Switch your projects once, done for all.


So, switching to pnpm does not entail any work habit changes?


Am I missing something? Don't you also need to change how CI and deployment processes call npm? If my CI server and then also my deployment scripts are calling npm the old insecure way, and running infected install scripts/whatever, haven't I just still fucked myself, just on my CI server and whatever deployment system(s) are involved? That seems bad.


Your machine has more projects, data, and credentials than your CI machine, as you normally don't log into Gmail on your CI. So, just protecting your machine is great.

Further, you are welcome to use this alias on your CI as well to enhance the protection.


Attacking your CI machines means an attacker can poison the artifacts you ship and the systems they get deployed to, and gain access to all the source it builds and can access (often more than you have locally) and all the infrastructure it can reach.

CI machines are very much high-value targets of interest.


> Further, you are welcome to use this alias on your CI as well to enhance the protection.

Yes, but if I've got to configure that across the CI fleet as well as in my deploy system(s) in order to not get, and also not distribute, malware, what's the difference between having to do that vs. switching to pnpm in all the same places?

Or more explicitly, your first point is invalid. Whether you ultimately choose to use Docker to run npm or switch to pnpm, it doesn't count to half-ass the fix and only tell your one friend on the team to switch; you have to get all developers to switch AND fix your CI system AND your deployment system(s) (if they are exposed).

This comment offers no opinion on which of the two solutions should be preferred, just that the fix needs to be made everywhere.


Your logic is backwards here. I would have a single person deal with the pnpm migration and CI rather than instruct the other 10 and hope everyone does the right thing. And think about it when the next person comes in... so I'd go for the first option for sure.

And npm can be configured to prevent install scripts from being run anyway:

> Consider adding ignore-scripts to your .npmrc project file, or to your global npm configuration.
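Concretely, that's a one-liner either way (`ignore-scripts` is a real npm config key):

  npm config set ignore-scripts true     # global: writes to your user .npmrc
  echo "ignore-scripts=true" >> .npmrc   # or per project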

But I do like your option to isolate npm for local development purposes.


> which one is easier?

> Switch all projects to `pnpm`?

Sorry; I am out of touch. Does pnpm not have these security problems? Do they only exist for npm?


pnpm doesn't execute lifecycle scripts by default, so it avoids the particular attack vector of "simply downloading and installing an NPM package allows it to execute malicious code."

As phiresky points out, you're still "download[ing] arbitrary code you are going to execute immediately afterwards" (in many/most cases), so it's far from foolproof, but it's sufficient to stop many of the attacks seen in the wild. For example, it's my understanding that last month's Shai-Hulud worm depended on postinstall scripts, so pnpm's restriction of postinstall scripts would have stopped it (unless you whitelist the scripts). But last month's attack on chalk, debug, et al. only involved runtime code, so measures like pnpm's would not have helped.


Exactly, so you should still execute all JS code in a container.


> That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

I won't execute that code directly on my machine. I will always execute it inside the Docker container. Why do you want to run commands like `vite` or `eslint` directly on your machine? Why do they need access to anything outside the current directory?
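For what it's worth, the same alias pattern from the top comment extends to those other entry points; a sketch, assuming the same image:

  alias npx='docker run --rm -it -v "$PWD":"$PWD" --net=host --workdir="$PWD" node:25-bookworm-slim npx'
  alias node='docker run --rm -it -v "$PWD":"$PWD" --net=host --workdir="$PWD" node:25-bookworm-slim node'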


I get this but then in practice the only actually valuable stuff on my computer is... the code and data in my dev containers. Everything else I can download off the Internet for free at any time.


No.

The most valuable data on your system, for a malware author, is the login cookies and saved auth tokens of various services.


Maybe keylogging for online services.

But it is true that work and personal machines have different threat vectors.


Yes, but I'm willing to bet most workers don't follow strict digital-life hygiene and cross-contaminate all the time.


You don't have any stored passwords? Any private keys in your `.ssh/`? DB credentials in some config files? And the list goes on and on.


I don't store passwords (that always struck me as defeating the purpose) and my SSH keys are encrypted.


This kind of mentality, and "seems a bit excessive to sandbox a command that really just downloads arbitrary code", is why the JS ecosystem is so prone to credential theft. It's actually insane to read stuff like that said out loud.


Right but the opposite mentality winds up putting so much of the eggs in the basket of the container that it defeats a lot of the purpose of the container.


It's weird that it's downvoted, because this is the way.


Maybe I'm misunderstanding the "why run anything on my machine" part. Is the container on the machine? Isn't that running things on your machine?

Is he just saying always run your code in a container?


> Is the container on the machine?

> Is he just saying always run your code in a container?

Yes.

> Isn't that running things on your machine?

In this context, where they're explicitly contrasted, it isn't running things "directly on my machine".


It annoys me that people fully automate things like type checkers and linting into post-commit hooks, or worse, outsource them entirely to CI.

Because it means the hygiene is thrown over the fence in a post-commit manner.

AI makes this worse, because the tools also run them "over the fence".

However you run it, I want a human to hold accountability for the mainline committed code.


I run linters like eslint on my machine inside a container. This reduces attack surface.

How does this throw hygiene over the fence?


Yes, in a sibling reply I was able to better understand your comment to mean "run stuff on my machine, inside a container".


pnpm has lots of other good attributes: it is much faster, and it also keeps a central store of your dependencies, reducing disk usage and download time, similar to what Java/Maven does.
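You can see where that shared store lives (`store path` is a real pnpm subcommand):

  pnpm store path   # prints the content-addressable store all projects hard-link from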


> command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

By default it directly runs code as part of the download.

With isolation, there is at least a chance to do some form of review/inspection.


I've tried using pnpm to replace npm in my project. It really sped up dependency installation on the host machine, but it was much slower in the CI containers, even after configuring the cache volume. That made me come back to npm.


> That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

I don't want to stereotype, but this logic is exactly why the JavaScript supply chain is in the mess it's in.


This folk-wisdom scapegoating of post-install scripts needs to stop, or people are going to get really hurt by the false sense of security it's creating. I can see the reasoning behind it, I really do; it sounds convincing, but it's only half the story.

If you want to protect your machine from malicious dependencies you must run everything in a sandbox all the time, not just during the installation phase. If you follow that advice then disabling post-install scripts is pointless.

The supply chain world is getting more dangerous by the minute and it feels like I'm watching a train derail in slow motion with more and more people buying into the idea that they're safe if they just disable post-install scripts. It's all going to blow up in our collective faces sooner or later.


You're right, it's not just about post-install scripts, nor NPM and JavaScript. This is a deep, fundamental issue in software development practices across most programming language ecosystems. And not only open source, since proprietary code is often even worse about vetting dependencies. More eyes are better; therefore open source (or source-available) is a prerequisite for security.

Improving NPM's processes, having more companies auditing packages, changing the way NPM the tool works - that's all good but far from enough, it doesn't actually address the root vulnerability, which is that we're way too comfortable running other people's arbitrary code.

Containerization and strict permission systems by default seem to be where we're headed.


Exactly. For JS code, run it all inside the container, all the time.

I have been doing it for weeks now with no issues.


There are so many vectors for this attack to piggyback off of.

If I had malicious intentions, I would probably typosquat popular plugins/LSPs that will execute code automatically when the editor runs. A compromised neovim or vscode gives you plenty of user permissions, a full scripting language, the ability to do http calls, system calls, etc. Most LSPs are installed globally; it doesn't matter if you downloaded it via a docker command.


> A compromised neovim or vscode gives you plenty of user permissions, a full scripting language, the ability to do http calls, system calls, etc. Most LSPs are installed globally; it doesn't matter if you downloaded it via a docker command.

Running `npm` inside Docker does not solve this problem. However, running `npm` inside Docker does not make this problem worse either.

That's why I said running `npm` inside Docker reduces the attack surface of the malicious NPM packages.


I think this approach is harmful because it gives people a false sense of security and makes them complacent by making them feel like they're "doing something about it even if it's not perfect". It's like putting on 5 different sets of locks and bolts on your front door while leaving the back door unlocked and wide open during the night.


Done is better than perfect, and a 100% secure system doesn't exist. Given how prolific these supply-chain attacks are, any mitigation (even if imperfect) seems to be a good step toward protecting yourself and your assets.


This isn't just "imperfect"; it's so deeply flawed that the next minor "mutation" of supply-chain attack tactics is guaranteed to wipe you out if you rely on it. It's just a matter of time: it could be tomorrow, next month, maybe a year from now.

Setting up a fully containerized development environment doesn't take a lot of effort and will provide the benefits you think you're getting here - that would be the "imperfect but good enough for the moment" kind of solution, this is just security theater.

Every time I make this point someone wants to have the "imperfect but better than nothing" conversation and I think that shows just how dangerous the situation is becoming. You can only say that in good conscience if you follow it up with "better than nothing ... until I figure out how to containerize my environment over the weekend"


Unfortunately, the current way things work is, like you said, "deeply flawed". You will not change it in a few months, not even in a few years.

What you can do, however, is adapt to current threats, the same way adversaries adapt to countermeasures. Fully secure setups do not exist, and even if one existed, it would probably become obsolete very quickly. Like James Mickens said, whatever you do, you can still be "Mossad'ed upon". Should we give up implementing security measures then?

Thinking about security in a binary fashion and gatekeeping it ("this is not enough, this will not protect you against X and Y") is, _in my opinion_, very detrimental.


Sorry, what I'm proposing here is far from binary but yes, I am gatekeeping the bare minimum. It has to be gatekept otherwise I'm afraid we'll enter a state of mass industry delusion where everyone thinks they're safe, "doing something", even "following industry best practices", but in reality all we're doing is: [1]

If a supply chain attack would be a serious incident for you then you need to take meaningful actions to protect yourself. I'm trying to hold "you" (proponents of this idea) accountable because I don't want you to rationalize all of these actions that leave you wide open to future attacks in known, preventable ways and then pretend nobody could've seen this coming when it blows up in our face.

It's not "good enough" - it's negligence because you know that the hole is there, you know that the hackers know about it, you know how trivial it is to exploit, and yet we're arguing whether leaving this known vector open is "good enough" instead of taking the obvious next step.

Containers are very far from a "fully secure" setup, since they suffer from escape vulnerabilities far more often than VMs, but their benefit is that they're lightweight and minimally obtrusive (especially for web development), and these days the integration with IDEs is so good you probably won't even notice the difference after a few days of adjusting to the new workflow.

You end up trading a bit of convenience for a lot of security - you're going to be reasonably well protected for the foreseeable future because the likelihood of someone pulling off a supply chain attack and burning a container escape 0-day on it is really low.

That should be good enough for most, and if your developer machine needs more protection than that, I'll take a guess and say your production security requirements already mandate a security review of all your dependencies before using them anyway.

With a VM you would get even more security but you would have to sacrifice a lot more convenience, so given the resistance to the less intrusive option this isn't even worth discussing right now.

How can you not see the nuance?

[1] https://i.imgur.com/Zj6rwEK.jpeg


> alias npm=...

I use sandbox-run: https://github.com/sandbox-utils/sandbox-run

The above simple alias may work for node/npm, but it doesn't generalize to many other programs available on the local system, with resources that would need to be mounted into the container ...


> The above simple alias may work for node/npm, but it doesn't generalize for many other programs that are available on the local system, with resources that would somehow have to get mounted into the container ...

Thanks. You are right, running inside Docker won't always work for local commands. But I am not even using local commands.

In fact, I have already removed `yarn`, `npm`, and several similar tools from my machine.

It is best to run them inside Docker.

> I use sandbox-run: https://github.com/sandbox-utils/sandbox-run

How does this work if my local command is a macOS binary? How will it run inside a Docker container?


Or use `chroot`. Or run it as a restricted owner with `chown`. Your grandparents' solutions to these problems still work.


That'll still allow access to env vars and interaction with other processes owned by the same user.

At the very least, you really need to add process isolation/namespacing as well; at that point it's going to be easier to just use the sandboxing/containerisation tool of your choice to manage it all for you.
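A quick illustration of the env-var part, assuming a hypothetical jail at /srv/jail that contains a shell:

  # chroot changes the filesystem root but inherits the environment:
  sudo env FAKE_SECRET=oops chroot /srv/jail /bin/sh -c 'echo "$FAKE_SECRET"'
  # prints "oops" -- the jailed process still sees it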


That definitely helps and is worth doing. On a Mac, though, I guess you need to move the entire development environment into containers due to native dependencies.


My primary dev environment is containers, but you can do a hell of a lot with Nix on a Mac.


Not sure how secure this really is, because it's fairly easy to break out of a Docker container with the default settings (the kernel is shared between containers and the host, unlike with VMs). Rootless Docker (or better, Podman) would improve security greatly.
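For example, a rootless variant of the alias (Podman's CLI is docker-compatible, so this is a drop-in sketch):

  alias npm='podman run --rm -it -v "$PWD":"$PWD" --workdir="$PWD" node:25-bookworm-slim npm'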


Can you show an example of how a malicious package can break out of Docker?


There have been quite a few exploits over the years, with the most recent public CVE 2 years ago [1].

Your specific setup uses `--net=host` and this opens you up to potential vulnerabilities (see [2]).

You also shouldn't forget that containers have unrestricted network access by default anyway. Even if your device is safe, they may be able to infect other vulnerable devices on your network.

[1]: https://docs.docker.com/security/security-announcements/#doc... [2]: https://github.com/0xn3va/cheat-sheets/blob/main/Container/E...
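One hedged tweak, not from the parent: dropping `--net=host` keeps registry access through Docker's default bridge network while removing the host-network exposure:

  alias npm='docker run --rm -it -v "$PWD":"$PWD" --workdir="$PWD" node:25-bookworm-slim npm'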


Won't that still download malicious packages that are deps?


Yeah, but those deps won't access your browser cookies and your secret keys that are outside the current directory.


Doesn't Docker run as `root` by default, and alter the permissions of your folder by taking ownership and making it `root:root`?


Just tested it and apparently it's not the case, neat!
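If it ever does bite (e.g. on Linux, where the daemon runs as root and bind-mounted files created in the container end up root-owned), a common fix is to run the container as your own UID/GID:

  docker run --rm -it --user "$(id -u):$(id -g)" -v "$PWD":"$PWD" --workdir="$PWD" node:25-bookworm-slim npm install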


This will break native dependencies when the host platform is not the same as the container platform.


My host is macOS. My container platform is Linux. Can you share an example where this approach will cause a failure?


> triple-backtick code blocks

If only :(


You should probably put some quotes around `$PWD`:

  alias npm='docker run --rm -it -v "$PWD:$PWD" --net=host --workdir="$PWD" node:25-bookworm-slim npm'
...Does this always work? I don't use much JS. Doesn't NPM sometimes build against system libs, which could be different in the container?


> You should probably put some quotes around `$PWD`:

Yeah, fair point.

> Doesn't NPM sometimes build against system libs, which could be different in the container?

Yes, and I run all JS inside a container.



