The reason to target k8s on cloud VMs is that cloud VMs don't subdivide as easily or as cleanly; managing them is a pain. K8s is an abstraction layer for that: rather than building whole machine images for each product, you create lighter-weight Docker images (how lightweight is a point of some contention), and you only have to install your logging, monitoring, etc. once.
Your advice about bigger machines is spot on: k8s's biggest problem is how relatively heavyweight the kubelet is, with memory requirements of roughly half a gig. On a modern 128 GB server node that's a reasonable overhead, and for small companies running a few workloads on 16 GB nodes it's a cost of doing business, but if you're running 8 or 4 GB nodes it looks pretty grim for your utilization.
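Taking that roughly-half-a-gig kubelet figure at face value (and ignoring other per-node components like kube-proxy and the CNI agent, which only make it worse), the overhead works out to roughly:

    0.5 GB / 128 GB ≈ 0.4%  of the node
    0.5 GB /  16 GB ≈ 3.1%  of the node
    0.5 GB /   8 GB ≈ 6.3%  of the node
    0.5 GB /   4 GB ≈ 12.5% of the node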
You can run pods with podman and avoid the entire k8s stack, or even use minikube on a machine if you want to. Now that rootless is the default in k8s[0], the workflow is even more convenient, and you can even use systemd with isolated users on the VM to provide more modularity and separation.
It really just depends on whether you feel you get value from the orchestration that full k8s offers.
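As a minimal sketch of that podman workflow (image names and ports here are just examples), running as an ordinary user with no daemon and no sudo:

    # create a pod and run two containers inside it, all rootless
    podman pod create --name pgdemo-pod -p 8080:8080
    podman run -d --pod pgdemo-pod -e POSTGRES_PASSWORD=example docker.io/library/postgres:16
    podman run -d --pod pgdemo-pod docker.io/myorg/app:latest   # placeholder application image
    podman pod ps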
Note that on k8s or podman, you can get rid of most of the 'cost' of that virtualization for single-placement and/or long-lived pods by simply sharing an emptyDir or other volume between pod members.
It's easy to test that sharing unix sockets that way performs so close to native that there is very little performance cost, and a lot of security and workflow benefit to gain.
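A minimal sketch of that pattern, assuming a postgres container and a client container (images, names, and paths are illustrative): both mount the same emptyDir, so the client reaches postgres over its unix socket instead of TCP.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pgdemo-pod
    spec:
      volumes:
        - name: pg-socket
          emptyDir: {}                         # shared scratch space, lives as long as the pod
      containers:
        - name: db
          image: docker.io/library/postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: example                   # demo only
          volumeMounts:
            - name: pg-socket
              mountPath: /var/run/postgresql   # postgres drops .s.PGSQL.5432 here
        - name: app
          image: docker.io/library/alpine
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: pg-socket
              mountPath: /var/run/postgresql   # the client connects via the shared socket

The same YAML runs under full k8s or with `podman kube play`.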
Since podman is daemonless, easily rootless, and on a Mac even lets you ssh into the local Linux VM with `podman machine ssh`, you aren't stuck with the hidden abstractions of Docker Desktop, which hides all of that from you. There's a lot of value in that.
Plus you can dump k8s-style YAML for the above with:
    podman kube generate pgdemo-pod
So you can gain the advantages of k8s without the overhead of the cluster, and there are ways to launch those pods from systemd, even from a local user that has zero sudo abilities (a sketch follows).
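One way to wire that up (a sketch assuming podman 4.4+ with Quadlet; paths and names are up to you) is to point a user-level `.kube` unit at the generated YAML:

    # ~/.config/containers/systemd/pgdemo.kube
    [Unit]
    Description=pgdemo pod, generated by podman kube generate

    [Kube]
    Yaml=%h/pgdemo-pod.yaml

    [Install]
    WantedBy=default.target

Then reload and start it as your own user; enabling linger keeps it running when you're not logged in:

    podman kube generate pgdemo-pod > ~/pgdemo-pod.yaml
    systemctl --user daemon-reload
    systemctl --user start pgdemo.service
    loginctl enable-linger "$USER"   # usually permitted for your own account, still no sudo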
I am using it to validate that upstream containers don't dial home, by producing pcap files, and I would also typically run the above with no network on the pgsql host, so it doesn't have internet access.
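A rough version of that setup (images and paths are placeholders): capture the workload's traffic from a sidecar that shares the pod's network namespace, or cut the network off entirely.

    # capture everything the container under test sends, from inside the same pod
    podman pod create --name audit-pod
    podman run -d --pod audit-pod -e POSTGRES_PASSWORD=example docker.io/library/postgres:16
    podman run --pod audit-pod --cap-add NET_RAW -v "$PWD":/cap docker.io/library/alpine \
        sh -c 'apk add --no-cache tcpdump && tcpdump -i eth0 -w /cap/pgdemo.pcap'

    # for the no-internet variant, create the pod with no network at all
    podman pod create --name offline-pod --network none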
IMHO it's often missed that k8s pods, while being the minimal unit of deployment, are in the general form just a collection of containers with specific shared namespaces.
As Red Hat gave podman to the CNCF in 2024, I have shifted to it, so I haven't checked whether Rancher can do the same.
The point is that you don't even need the complexity of minikube on VMs; you can use most of the workflow even for the traditional model.
The NSA has railroaded bad crypto before [1]. The correct answer is to just ignore it, to say "okay, this is the NSA's preferred backdoored crypto standard, and none of our actual implementations will support it."
It is not acceptable for the government to force bad crypto down our throats, and it is not acceptable for the NSA to poison the well this way, but for all that I respect DJB, they are "playing the game" and 20 to 7 is consensus.
I found the opposite problem. I tried to hang out with non-tech people, and I spent a lot of time doing so. The kinds of people I ended up hanging out with (and I recognize this might just be a me problem, though I certainly tried to avoid pigeonholing myself) were not great people. The great non-tech people I met didn't stick around for long.
Anubis's design is copied from a great botnet-protection mechanism: you serve the JavaScript cheaply from memory, and then the client is forced to do expensive compute in order to use your expensive compute. This works great at keeping attackers from wasting your time; it turns a 1:1000 amplification in compute costs into 1000:1.
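For illustration, a hashcash-style challenge looks roughly like this (a sketch, not Anubis's actual code; the challenge value and difficulty are made up). The client grinds nonces until the hash has enough leading zeros; the server verifies with a single hash.

    # client side: brute-force a nonce so sha256(challenge + nonce) starts with `difficulty` zero hex digits
    challenge="d3adb33f"    # handed out by the server
    difficulty=4
    prefix=$(printf '0%.0s' $(seq 1 "$difficulty"))
    nonce=0
    while :; do
      h=$(printf '%s%s' "$challenge" "$nonce" | sha256sum | cut -d' ' -f1)
      [ "${h:0:$difficulty}" = "$prefix" ] && break
      nonce=$((nonce + 1))
    done
    echo "solution: nonce=$nonce hash=$h"
    # server side: recompute one sha256 over challenge + nonce, which is the whole asymmetry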
It is a shitty and obviously bad solution for preventing scraping traffic. The goal of scraping traffic isn't to overwhelm your site; it's to read it once. If you make it prohibitively expensive to read your site even once, nobody comes to it. If you make it only mildly expensive, nobody scraping cares.
Anubis is specifically DDoS protection, not a general anti-bot tool, aside from defeating basic bots that don't emulate a full browser. It's been cargo-culted in front of a bunch of websites because of the latter, but it was obviously not going to work for long.
> The goal of scraping traffic isn't to overwhelm your site, it's to read it once.
If the authors of the scrapers actually cared about that, we wouldn't have this problem in the first place. But today the more accurate description is: the goal is to scrape as much data as possible, as quickly as possible, preferably before your site falls over. They really don't care about side effects beyond that. Search engines have an incentive to leave your site running; AI companies don't (maybe apart from Perplexity).
First of all, Anubis isn't meant to protect simple websites that get read once. It's meant for things like a GitLab instance where AI bots are indexing every single commit of every single file, resulting in thousands if not millions of reads. And reading an Anubis page once isn't expensive either. So I don't really understand what point you are trying to make, as the premise seems completely wrong.
This is important work, and I thank you for it. These public transparency logs are important for keeping honest people honest, but also for keeping dishonest people out: if someone does manage to backdoor Google's build process, this is how they'll know.
Why is this important work to you? Reproducible builds, to me, are a complete waste of engineering resources and time that could be used elsewhere. All of this work goes towards protecting against theoretical attacks rather than practical ones that are actually happening in the wild.
Distributing software is a lot harder than just building it (with the caveat that people don't want to install build dependencies).
So we rely on centralized distribution (and build).
Because of this we have to assume trust of that entire chain.
When builds are reproducible they are independently verifiable which means you only have to trust the code and not the entire distribution chain (build systems, storage, etc).
Of course if no one bothers to verify then it doesn't matter.
This is sort of how xz happened: no one verified that the release tarballs were what they purported to be.
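As a concrete sketch of what independent verification means here (the package name, URL, and build steps are placeholders, and this assumes the project documents a deterministic toolchain and flags):

    # rebuild from the published source and compare digests with what the distributor shipped
    curl -LO https://example.org/foo-1.2.3.tar.gz
    tar xf foo-1.2.3.tar.gz && cd foo-1.2.3
    ./configure && make                  # must match the distro's compiler, flags, and environment
    sha256sum ./foo /usr/bin/foo         # identical hashes mean the shipped binary really came from this source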
Wasn't the vulnerability triggered by a malicious script that was added silently to the tarball? Reproducible builds would have shown that the tarball was not the exact output of the build. Even though the malicious payload was already in the code, the trigger was not, and was hidden.
>Reproducible builds would have shown that the tarball is not the exact output of the build
That is not what reproducible builds do. Reproducible builds show that the compiled binary comes from its inputs. You have to use the same inputs as the distro or it will most likely not match. The vulnerability was part of the input, which means that anyone else reproducing the build would get a byte-exact copy of the vulnerable library and no discrepancy would be found. Reproducible builds only flag cases where the builds don't match.
In this scenario you could compare release tarballs against the git repository, but that has nothing to do with reproducible builds.
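That check is simple enough to sketch (URLs, tag, and file names are placeholders); expect some noise from generated autotools files, but anything that isn't derivable from the repo, like xz's doctored m4 macro, stands out:

    git clone https://example.org/project.git && cd project
    git archive --prefix=project-1.0/ -o /tmp/from-git.tar v1.0
    mkdir -p /tmp/a /tmp/b
    tar xf /tmp/from-git.tar -C /tmp/a
    tar xf /tmp/project-1.0-release.tar.xz -C /tmp/b
    diff -r /tmp/a /tmp/b                # flags anything shipped in the tarball that isn't in git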
If you do reproducible builds for only the binary of the program and not what's around it, I don't know if it makes any sense. Related pieces like the installation script should be checked against the source too. Otherwise that would be like signing the binary but not the whole package.
In the case of xz, the source code was modified in the install script and not in the binary itself. Checking against a reproducible tarball would have shown the package was not identical, as the trigger was added manually by the maintainer and never checked into the repo. If you had a byte-exact copy of the repository, it would show immediately that it's not the same source used to build the package.
Otherwise, reproducible builds are useless if you only check the binary and not the whole generated package, as xz has shown, because the malicious code could live somewhere other than the binary.
Nix packages seem to be geared toward reproducible builds of the whole package and not just the binary, so it seems possible to do.
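One hedged way to exercise that with Nix (the `hello` attribute is just an example): `nix-build --check` rebuilds a derivation that is already in the store and fails if the new output differs, which tests exactly this whole-package determinism.

    nix-build '<nixpkgs>' -A hello           # build the package once
    nix-build '<nixpkgs>' -A hello --check   # rebuild it and compare; any nondeterminism fails the build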
Maybe some of them were preventable, but if it had been in place, attackers would easily adapt to fool the automated systems and we would be back at the status quo.
>without reproducible build you can't independently verify anything.
This is a myth propagated by reproducible-builds people. Byte-for-byte similarity is not required to detect that a Trojan was injected into a build.
There are tons of people in the West who have no qualms about doing this for pure crime purposes; many of them are the ones who espouse most ardently that doing this work for the government is immoral.
I am a Site Reliability Engineer (SRE), Google style, with experience at both large and small organizations. I can help you build a Platform Engineering practice from the very beginning. I'm looking to help small dev teams increase their velocity by implementing DevOps best practices: CI/CD, Kubernetes deployments, and effective monitoring frameworks.
My resume: https://resume.gauntletwizard.net/ThomasHahnResume.pdf
My LinkedIn: https://www.linkedin.com/in/thomas-hahn-3344ba3/
My Github: https://github.com/GauntletWizard