
In my experience even small machine screws (M3) can cut their own threads into a properly sized hole, and function well enough for a small number of re-assemblies. That said, I'm rarely designing for portability; I just dial in the right hole size for my printer with some test prints.

Franz Jägerstätter was not a soldier.

>Drafted for the first time on 17 June 1940, Jägerstätter, aged 33, was again conscripted into the German Wehrmacht in October and completed his training at the Enns garrison.

https://en.wikipedia.org/wiki/Franz_J%C3%A4gerst%C3%A4tter

That kind of sounds like he was a soldier to me. Or trained to be one.


I agree with you: if it ran pyc code directly, I would be okay saying it "runs Python".

However, it doesn't seem like it does; the pyc still has to be further processed into machine code. So I also agree with the parent comment that this seems a bit misleading.

I could be convinced that the native code is sufficiently close to pyc that I don't feel misled. Would it be possible to write a boot loader that converts pyc to machine code at boot? If not, why not?


I still think "dev environments" really ought to run tests directly with your language's native tools, e.g. `cargo test`, `bundle exec rspec`, etc. If you make me run a VM that runs Kubernetes that runs a Docker container that runs the tests, I will be very, very upset. Doing this properly and reliably can still be a lot of work, possibly more work if not relying on Docker is a design goal (which it must be if you want to run natively on macOS).

There seem to be a lot of tools in this space. I wish they wouldn't call themselves tools for "dev environments" when they are really more like tools for "deploying an app to your local machine", which is rather different.


I actually have the exact opposite viewpoint: if you're managing a platform with multiple teams, what you are suggesting is way more of a pain than a standardized, container-based workflow. You want a language-agnostic test runner that runs generic commands. The reason is that you want to be able to quickly skill up engineers and let them switch codebases easily, since the interface (like Tilt) is the same across all of them.

You give up a bit of snappiness, sure, but you can also keep the very small non-container-based tooling, like linting, outside of the container.


> You give up a bit of snappiness, sure, but you can also keep the very small non-container-based tooling, like linting, outside of the container.

You give up way more than snappiness. Doing real development work, i.e. compiling, testing, debugging, is very cumbersome in a remote environment.

So where do you want to spend your time? Bandaids to make remote development suck less, or effort to develop locally, natively? There is no free lunch. If you choose "neither", your developer experience is gonna suck. (Most companies choose "neither" by the way, either consciously or unconsciously).


> Doing real development work, i.e. compiling, testing, debugging, is very cumbersome in a remote environment.

It really isn't? Of course it depends on the ecosystem, but for the JVM, for example, you literally just expose debugging port 5005 out of the container and boom: step-through and other live debugging work just as well as outside the container. And, as you allude to, if you're native you are facing a "works on my machine" problem unless you go all in on a hermetic and reproducible solution like Bazel or Nix. And chances are, unless you have that crack team of 10xers, a good hunk of your dev user base is going to struggle with the complexity and general ecosystem issues of those two solutions.
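
To make that concrete, a minimal sketch (the image name and setup are hypothetical; the jdwp agent flag is the standard one, using the JDK 9+ address syntax):

  # Publish the JVM debug agent from the container, then point the
  # IDE's remote-debug configuration at localhost:5005.
  docker run --rm -p 5005:5005 \
    -e JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005' \
    myapp:dev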

You've probably seen the worst world, where people do containers wrong. And a lot of people do them wrong. But it's pretty easy to learn how to do them right. Someone can study multi-stage Docker builds for half a day and write perfectly fast, cache-first containerized builds. Properly BuildKit-cached local container builds are extremely fast.
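
As a rough illustration, a hypothetical multi-stage build for a Rust service (all names are made up; the cache-mount paths are just the usual cargo ones):

  # Write a multi-stage Dockerfile, then build it with BuildKit.
  cat > Dockerfile <<'EOF'
  # syntax=docker/dockerfile:1
  FROM rust:1.77 AS build
  WORKDIR /src
  COPY . .
  # Cache mounts keep the registry and target dirs warm across builds;
  # the binary is copied out because cache mounts don't persist in layers.
  RUN --mount=type=cache,target=/usr/local/cargo/registry \
      --mount=type=cache,target=/src/target \
      cargo build --release && cp target/release/app /app
  # Thin runtime stage: only the artifact is copied in.
  FROM debian:bookworm-slim
  COPY --from=build /app /usr/local/bin/app
  CMD ["app"]
  EOF
  DOCKER_BUILDKIT=1 docker build -t app .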

There are other ways, of course, each with their own tradeoffs. You can do everything in Nix, and now you are spending your time fighting with Nix. You can do everything in Bazel, and now you are spending your time fighting with Bazel. In the end your stuff is gonna go into a container anyway (for most people), and you still need to understand the container technology because of that. So why not both reduce your toolchain sprawl and simultaneously recreate that exact environment on the local machine?


Not to mention the developer experience is usually sub-par.

I firmly believe that the primary way of interacting with my tests should be running them one by one from the IDE, and that running the code should be run/attach with breakpoints.


It takes some work, but it's entirely possible to both use Docker and run individual tests with breakpoints (in a Docker container) in your IDE. For example, you can attach VS Code to a running container.
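
For instance (container and test names are hypothetical here), the IDE attaches to the running container, e.g. via the Dev Containers "Attach to Running Container" command, and individual tests run inside it:

  # Run a single rspec example inside the already-running container;
  # breakpoints hit in the IDE attached to that same container.
  docker exec -it app-dev bundle exec rspec spec/models/user_spec.rb:42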

Yes, but it creates a restrictive and fragile happy path, when the aim imo should be closer to a lab/woodshop where you can take things apart however you like and need for the moment.

Shells in containers under VS Code are unbearably laggy and crappy.

I simply have a container for each project using my own container-shell.

I run my bundles / whatever, have all the tooling, and can use VSCode to attach via SSH (I use OrbStack, so I get project hostnames for free).

It's the best workflow for me. I really wanted to like dev containers, but again: too heavy, buggy, bulky.

http://github.com/jrz/container-shell


True. I learned this the hard way.

Some things are trivial and nearly free - created_at, updated_at. I don't think engineers need to bring trivialities like this to a "product owner". Own your craft.
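
In Postgres terms it's a one-liner per table (a hypothetical sketch; table name made up):

  psql mydb <<'SQL'
  ALTER TABLE widgets
    ADD COLUMN created_at timestamptz NOT NULL DEFAULT now(),
    ADD COLUMN updated_at timestamptz NOT NULL DEFAULT now();
  -- Keeping updated_at current on writes still needs a trigger or ORM hook.
  SQL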

When the product you're developing is governed by regulations and standards you need to comply with, owning your craft means doing things by the book, not adding fields on your own because they might be useful later.

So what? I've worked at places with lots of regulation. Part of every development job is learning the product domain. In that case devs become comfortable with reading standards/laws/regulations and anticipating when the software implementation might interact with the areas they cover.

Sure, there were people whose job was to offload as much compliance work as possible from everyone else: by turning it into internal requirements, participating in design discussions, and specializing in ensuring compliance. But trying to isolate the development team from it is just asking for micromanagers.


> So what?

Think before you act. The machine has no brain. Use yours.

> Part of every development job is learning the product domain.

Yes.

> In that case devs become comfortable with reading standards/laws/regulations and anticipating when the software implementation might interact with the areas they cover.

This is what I'm saying, too. A developer needs to think about whether what they are doing is OK under the regulations they're flying against. They need to ask for permission by asking themselves, "wait, is this OK under the regulation I'm trying to comply with?"

> But trying to isolate the development team from it is just asking for micromanagers.

Nope, I'm all for taking initiative, and against micromanagement. However, I'm also against the "I need no permission because I'm doing something amazing" attitude. So own your craft, "code responsibly".


Oh, I thought you were disagreeing with hamandcheese's point that every little decision doesn't need to go through a product owner before anything happens.

No, not at all. By "the book", I meant regulations, not the management. :)

But it happens that product managers know (or at least know about) and keep tabs on the relevant regulatory environment. I think it's not scalable if every SWE on the team is going to legal to understand things, like why we actually do need to hard delete data when customers click the Delete button.

If you force every SWE to go to legal for every technical decision, or ask for permission, it's not scalable, yes. On the other hand, if the team is in it for the long haul of developing this kind of regulated application, the knowledge will accumulate over time, and it'll trickle down from product managers to seniors to juniors.

This is the kind of tribal knowledge you want to spread among a development team, and if a collaborative document of "Why it's done this way" can be propped up with pointers to relevant sections of the regulation, it'd be a very good thing.

Not unlike NASA's global Lessons Learned document.


I've never worked at a place with product owners, but their post made me appreciate my roles where I'm trusted to help design the product myself. Yeesh.

Being unable to even call the shot of whether a database table should have an updated_at or soft-delete sounds like a Dilbertian hellscape to me.


I think the tricky part lies in knowing which things can be done without consulting any product owner. I agree that created_at and updated_at don't cause any harm. deleted_at, on the other hand, cannot be decided by engineers alone (mainly for GDPR reasons: if something is expected to be totally deleted, then that must be it). As usual, these kinds of things are obvious to engineers with years of experience, not so much to newcomers.

A soft delete might not be, for compliance reasons (GDPR and the like). Otherwise I agree.

Soft deletes can be done in a GDPR-compliant way. But transparency is key.

The problem of course is that soft deletes are hard. As soon as you take sub-resources and relations into consideration, especially with shared ownership, things get complicated. SQL databases can usually handle cascading deletes and nulling, but that doesn't work with soft deletes - also, if a soft delete exists to allow for a restore, how do you handle references you would null in an actual delete? Now you need to either track the deleted value or add logic to every query involving that reference to filter out soft deletes in addition to null references (which adds query complexity).
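
A small sketch of the query-complexity point (hypothetical schema):

  psql app <<'SQL'
  -- Soft delete: mark the row instead of removing it.
  UPDATE users SET deleted_at = now() WHERE id = 42;

  -- Every query touching the reference now needs an extra predicate
  -- that a hard delete (cascade/null) would have made unnecessary.
  SELECT o.*
  FROM orders o
  JOIN users u ON u.id = o.user_id
  WHERE u.deleted_at IS NULL;
  SQL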


Even those can be more complicated, though, and it should be clear what they're for and why they exist. Will this result in an object having an updated_at timestamp elsewhere in a larger application? Is it clear which properties that refers to?

Woah, hadn't seen this before but this is really cool!

I was recently looking for a way to do a low-scale serverless DB in gcloud; this might be better than any of their actual offerings.

Cloud Firestore seems like the obvious choice, but I couldn't figure out a way to make it work with the existing gcloud credentials that are ubiquitous in our dev and CI environments. Maybe a skill issue.


Adding signing as a requirement can easily turn what was once a very simple distribution mechanism into something much more complex - now you need to manage signing certificates and keys to be able to build your thing.

The cost is far, far higher than the price.


But it doesn't in practice.

I develop and distribute a few free apps for macOS, and building/notarising is never a problem.


In contrast to this point: as long as I use Xcode and do the same thing I've always done, allowing it to manage provisioning and everything else, I don't have a problem. However, I want to use CI/CD. Have you seen what kind of access you have to give fastlane? It's pretty wild. And even after giving it the keys to the kingdom, it still didn't work. Integrating Apple code signing with CI/CD is really hard, full of very strange error messages and incantations to make it "work".


I don't know about fastlane, since my CI/CD is just a shell script, and signing and notarising is as hard as (checking the script) running `codesign ...` followed by `notarytool submit ... --wait`
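
Expanded slightly, the flow looks something like this (identity, profile, and app names are placeholders, not the parent's actual script):

  # Sign with the hardened runtime, zip, notarize, staple.
  codesign --sign "Developer ID Application: Example Corp (TEAMID1234)" \
    --options runtime --timestamp MyApp.app
  ditto -c -k --keepParent MyApp.app MyApp.zip
  xcrun notarytool submit MyApp.zip --keychain-profile "notary" --wait
  xcrun stapler staple MyApp.app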

Yes, you need to put keys on the build server for the "Developer ID Application" signature (which is what you need to distribute apps outside of the App Store) to work.

You do not need to give any special access to anything else beyond that.

Anyway, it is indeed more difficult than cross-building for Darwin from Linux and calling it a day.


You seem to be comparing a single dev shipping apps to the world vs. a corporate team pushing to employees (if I get the parent's case right).

In most cases, just involving account management makes the corporate case 10x more of a PITA. Doing things in a corporate environment is a different game altogether.


Do you distribute OSS software which requires notarizing? If so, have you found a way to let the community build the software without a paid developer account? I would be very interested in a solution which allows OSS development relying on protected APIs without requiring anyone who builds the app to have a paid developer account.


Code signing is absolutely disgusting practically and philosophically. It has very reasonable and good intent behind it, but the practical implementations cause great suffering and sadness both for developers (cert management, cost, tools) and end-users (freedom of computing).

It is ugly: https://hearsum.ca/posts/history-of-code-signing-at-mozilla/


I take it you feel the trade-off of dev-team inconvenience vs. end-user security is not worth it?


They're talking about internal software for internal users. It can be made insanely secure, but that surely isn't the primary concern in this case.


I'm just observing that the cost is a lot higher than $99/year.

I do this professionally, I maintain macOS CI workers for my employer. Apple doesn't make it easy.


My initial impression (from the examples, not actual usage) is that this feels like a weird halfway point between a Dockerfile and using actual code to define and compose software (i.e. like Nix). Not super appealing to me as a heavy Nix user.


Ahh, just what I need, shoddy Dockerfiles to define not only my applications but now my whole OS!


The next step is obviously:

curl https://install.os.shoddydockerfile.com | sh


You forgot the sudo in the second part.


You can use buildah (shell, Python, etc.) or Nix to create OCI images.

This has nothing to do with docker.
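
For instance, a minimal buildah sketch (image and package choices are arbitrary):

  # Build an OCI image imperatively, no Dockerfile involved.
  ctr=$(buildah from docker.io/library/alpine:3.19)
  buildah run "$ctr" -- apk add --no-cache python3
  buildah commit "$ctr" my-os-image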


Okay, but at that point why bother with the intermediate OCI images? Especially with Nix: if you're gonna use Nix, you may as well build the OS directly (i.e. use NixOS).


OCI isn't a particularly good image format; the only thing it has going for it is that it's the thing Docker uses. I would absolutely not be surprised if 90% of future bootc OCI images are built with Dockerfiles.


A quick google seems to indicate that they are still doing business under that same name.

https://catandcloud.com/


Yeah, but it was only the apparel side that Caterpillar threatened, so they may have won, or given up and stopped selling clothes. Looking at their online store, they don't have anything with Cat printed on it right now.

