
Ubuntu-minimal ( https://canonical.com/blog/minimal-ubuntu-released ) doesn't have any of those binaries though.

And that's also why you have multistage Docker builds: to make sure your production container doesn't carry all the unneeded files from your development container. https://docs.docker.com/develop/develop-images/multistage-bu... .
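A rough sketch of that pattern (the image tag, Makefile, and binary name are all illustrative): the first stage carries the toolchain, the second only the built artifact.

    # Build stage: has the compiler and dev packages.
    FROM ubuntu:22.04 AS build
    RUN apt-get update && apt-get install -y build-essential
    COPY . /src
    RUN make -C /src

    # Runtime stage: only the built artifact is copied over.
    FROM ubuntu:22.04
    COPY --from=build /src/myapp /usr/local/bin/myapp
    CMD ["myapp"]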




This removes far more than a multistage Docker build ever would. Do you need bash, dash, passwd, or many of the other binaries and files that are in the image by default? No, you don't. The only way to do anything similar to what docker-slim does is with a scratch image, which doesn't work unless you copy in everything you need.
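For comparison, the scratch route looks something like this sketch, assuming a statically linked Go binary (names and paths are illustrative). Everything the app needs at runtime has to be copied in by hand:

    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    # Static build, so the binary needs no libc from a base image.
    RUN CGO_ENABLED=0 go build -o /app .

    FROM scratch
    # The image contains only what you explicitly copy: no shell, no passwd.
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]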


Still seems kind of silly. If you base everything on ubuntu minimal, you'll only have the one copy of that base image, which is a fraction of the size of the `docker` and `dockerd` binaries added together. No server running docker will have a problem keeping one or two versions of ubuntu minimal on it.

But if you go around "minifying" all your applications independently, you won't have that shared base layer. One application needs `sh` and another doesn't? Now you get two entire base layers, one with it and one without. Sure, each image's total size will be less, but the size of all your different images added up will be greater because you killed the sharing.

If for some reason the 29 megs of ubuntu minimal (or even fewer for alpine) are a problem (which they aren't on your server that already has over a hundred megs of `docker` binaries), then the right solution is to better control layer sharing. Ensure that you don't have different base layers between your applications. And then--strictly for kicks and giggles--you could minify that base layer to the minimal set of what all your images require. To save a 51K `passwd` binary (woohoo!).
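One way to check whether two images really share their base (image names hypothetical): compare their layer digests. Identical leading digests mean those layers are stored only once on the host.

    docker image inspect --format '{{json .RootFS.Layers}}' app-a:latest
    docker image inspect --format '{{json .RootFS.Layers}}' app-b:latest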


One question: is it possible, in any way, that passwd, or any other binary that stays in the image even though you don't need it, has a security vulnerability that could cause trouble on the host if someone got into the container one way or another (most likely through your app)?

Hint: yes, it is, and that could be a problem. A giant one.


The problem is not the removing, though. The problem is: what or who guarantees that nothing broke after all these files are removed? Especially in obscure code paths in nested dependencies?

With something like Alpine Linux or Ubuntu minimal, you trust the package maintainers to make sure that if you use Python in your Docker image, it will work like it worked for them. Out here, the answer is just "Yes (it is safe)! Either way, you should test your Docker images.".

As a bad example: if a library used by your application has a "theme" feature that requires different files at night than during the day, you might still say "it worked during my tests", but things definitely broke, and the only thing you can blame is this overzealous tool.

That bad example is from back when I was trying to make AppImages for an application we used. At first, all we did was recursively collect all the libraries reported by ldd. Then it turned out some libraries were only being dlopen'ed by other libraries under specific circumstances, and we missed them. So we manually added those libraries. Then it turned out we had missed the config files and other resources used by those libraries. Eventually we shipped all the files belonging to all the distro packages used by the libraries we depended on, and left it at that.
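That naive first step looked roughly like this (the binary and target directory are hypothetical). It catches link-time dependencies but, as described, misses anything pulled in later via dlopen:

    # ldd lines look like: libfoo.so.1 => /usr/lib/libfoo.so.1 (0x...)
    # Copy each resolved library into the AppImage tree, keeping its path.
    ldd ./myapp | awk '/=> \// {print $3}' | \
      xargs -I{} cp --parents {} AppDir/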


Your tests and your application knowledge should.

In some cases I essentially ensure my whole app survives by using --include-path flags, so that what gets removed is, you know, only the things I absolutely don't need.
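Roughly like this sketch (image name and paths are illustrative; docker-slim accepts the flag repeatedly, but check the current docs for exact syntax):

    # Force these paths to survive minification even if the runtime
    # probes never touch them during the test run.
    docker-slim build --include-path /opt/app --include-path /etc/ssl myorg/myapp:latest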



