Self-hosting GitLab is not, and has never been, easy if you do it right: it's very heavy on resources and takes a lot of time and effort to upgrade. It's also extremely complex and has many moving parts. Stick to Gitea or Forgejo; they upgrade just by restarting the daemon. Use MySQL for the database if you want comparable ease of maintenance (same thing: upgrading between major versions requires replacing the binaries and restarting the daemon).
Seems like a rather easy thing to go wrong in the client, no?
User sends message via client. Client fumbles the recipient id. Message ends up at the wrong recipient.
Examples: an incorrect recipient ID attached to a contact in the list where the user selects the recipient. Buggy selection of multiple targets in the selection UI due to incorrect touch event handling. Incorrect deletion of a previously selected and then deselected recipient from the recipient array of a multi-target message. Or, if working low level, even a good old off-by-one error reading out-of-bounds data for the recipient list (though that one should hopefully trigger a faulty send request, since other fields would no longer match). There are endless examples.
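To make the deselection example concrete, here's a hypothetical TypeScript sketch (made-up names, not any real client's code) of how a recipient list can silently end up wrong:

```typescript
// Recipients currently ticked in the multi-target send UI.
let recipients: string[] = ["alice", "bob", "carol"];

// Called when the user unticks a contact before hitting send.
function deselect(id: string): void {
  const index = recipients.indexOf(id);
  // Bug: if the UI and this array ever get out of sync, indexOf
  // returns -1, and splice(-1, 1) silently removes the *last*
  // recipient instead of the one the user unticked. The message
  // then still goes out to the contact the user thought they removed.
  recipients.splice(index, 1);
}

deselect("dave");          // "dave" was never in the list...
console.log(recipients);   // ...yet "carol" just got dropped: ["alice", "bob"]
```

Nothing about that send request looks malformed to the server; the list is simply wrong.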
The server can't really safeguard against the client submitting a perfectly legitimate-looking send request even though the user intended to send the message to a different recipient.
I'm not sure if I missed a bit here, but I have some colleagues doing research on unikernels for HPC, and the point is that this unikernel runs directly on the hardware or hypervisor, not inside another VM. The unikernel is effectively a minimal VM, and the network stack is one of the things they struggle with the most, simply because of the sheer effort involved.
[One of the authors of the paper] I wouldn't recommend writing a network stack from scratch; that is a lot of effort. Instead, with the Unikraft LF project (www.unikraft.org) we took the lwip network stack and turned it into a Unikraft lib/module. At KraftCloud we also have a port of the FreeBSD stack.
JSON for something that essentially mirrors a shell installation process? Feels like trying to reinvent things with a golden hammer rather than actually making anything easier.
It's more like choosing not to use semi-autonomous driving features when the car relies entirely on your expertise to realise and correct when it's making a mistake. The main difference is the risk: your own life at stake versus bugs in some production system.
But why are you pushing errors to production? You know you're allowed to fix the LLM's code output, right?
If a robot could paint your house, but made three small errors, would you refuse to use it? Or would you just fix the three small errors by painting over them?
There's some kind of John Henry complex going on in this AI discussion.
You're working under the assumption that you will be able to find the errors. I have personally always found reviewing code way harder than writing it, and we already push tons of bugs to production in code that was written and reviewed.
Due to the nature of my current work I haven't really used GPT for coding yet, but I keep wondering: isn't it easier to write code than to read and truly understand it? So how much development time is really saved if I still have to care about off-by-one errors, correct identity checks in hash maps, and all the other edge cases I probably should care about? Those are all things that are much harder to spot when reading code than when writing it.
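To give a concrete (and entirely made-up) TypeScript example of the hash-map identity problem I mean: a `Map` keyed by objects compares keys by reference, so a cache that reads perfectly naturally never actually hits.

```typescript
interface UserId { value: string }

// Cache keyed by user id objects.
const cache = new Map<UserId, string>();

function profileFor(id: UserId): string {
  const cached = cache.get(id);              // Map compares object keys by
  if (cached !== undefined) return cached;   // reference, not by value...
  const profile = `profile for ${id.value}`;
  cache.set(id, profile);                    // ...so two separate {value: "42"}
  return profile;                            // objects never match each other.
}

// Every caller builds a fresh UserId, so the cache grows forever and never
// returns a hit. It type-checks, reads fine, and is wrong.
profileFor({ value: "42" });
profileFor({ value: "42" });
```

Writing that code, you'd probably stop and think about the key type; reading it, the lookup looks completely routine.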
So I keep wondering whether the time we save with GPT just comes at the cost of introducing more unknown bugs.
I guess this also has a lot to do with what kind of code is being written. I would be much more concerned about a system-level C++ library than some JavaScript CRUD.
Your comment is a great example of how intelligent humans hallucinate!
GPT-4 often handles errors well. The generated code is easy to review if you understand what you asked for (especially if it generates tests and examples too, which it can). Etc.
People have different experiences with any given thing. You might find your TV remote intuitive and easy to use. Your grandparents might find it overwhelmingly difficult. Which is true?
Yeah, sorry for the snark. I was mostly referring to how the comment I replied to said they hadn’t actually used GPT for coding, but had lots of ideas about its limitations etc.
If you read closely, I am not talking about the limitations of GPT, but about the limitations of the developer who has to fix someone else's code (GPT essentially being another developer). I guess it depends on the complexity of the problem. A lot of stuff is very easy to review; for me, other stuff takes more time to understand than it would have taken to write.
According to the RFCs, PUT can be used to create a resource as long as the request has a clear target. That doesn't necessarily mean an ID, just some kind of identifier (composite key, hash of the content, URL, etc.). When a create happens instead of an update, the server should respond with 201 Created and indicate the location or identifier of the created resource.
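A minimal sketch of that upsert behaviour, assuming an Express-style handler and a made-up /notes/:id resource (nothing here is from the RFCs themselves, just an illustration):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// In-memory store standing in for a real database.
const notes = new Map<string, { body: string }>();

// PUT with a client-chosen identifier: update if it exists, create if not.
app.put("/notes/:id", (req, res) => {
  const id = req.params.id;
  const existed = notes.has(id);
  notes.set(id, { body: req.body.body });

  if (existed) {
    res.status(200).json(notes.get(id));   // plain update of an existing note
  } else {
    res.status(201)                        // created a new resource
      .location(`/notes/${id}`)            // tell the client where it lives
      .json(notes.get(id));
  }
});

app.listen(3000);
```

The point is just the status split: the same PUT handler returns 200 when the target already existed and 201 plus a Location header when the request created it.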