For the second case, is there an argument that this process was unambiguous and objective in terms of a fairly wide range of metrics over color spaces?
I don't actually have much idea of how far, numerically, colors vary due to different illumination, or due to different digitization processes.
In this case, the Piet palette uses 6 hues and 3 "steps" of lightness (plus black and white), so it seems hard to mess it up too much even if it's based more on a human understanding of colors than a mathematical one (red -> red, light blue -> light blue, dark blue -> dark blue, white -> white, etc.).
It's also interesting to note that, for Piet, the colors themselves don't have any particular meaning: the instructions are encoded by the difference between one color and the next color in the program direction, with movement within a block of a single color (or through white) executing nothing. Both the hue change and the lightness change are counted cyclically, so moving from red to light red is a change of (0 hue steps, 2 lightness steps), and moving from blue to dark red is a change of (2 hue steps, 1 lightness step). The exact colors don't matter much, only the differences between them.
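A toy sketch of that differential (not a real Piet interpreter; it just assumes the hue and lightness cycles from the Piet spec):

```typescript
type Hue = "red" | "yellow" | "green" | "cyan" | "blue" | "magenta";
type Lightness = "light" | "normal" | "dark";

const HUES: Hue[] = ["red", "yellow", "green", "cyan", "blue", "magenta"];
const LIGHTNESS: Lightness[] = ["light", "normal", "dark"];

// Per the Piet spec, both changes are counted cyclically "forward"
// (further round the hue cycle / towards darker).
function transition(
  from: { hue: Hue; lightness: Lightness },
  to: { hue: Hue; lightness: Lightness },
): [hueSteps: number, lightnessSteps: number] {
  const h = (HUES.indexOf(to.hue) - HUES.indexOf(from.hue) + 6) % 6;
  const l = (LIGHTNESS.indexOf(to.lightness) - LIGHTNESS.indexOf(from.lightness) + 3) % 3;
  return [h, l];
}

// red -> light red: [0, 2]   blue -> dark red: [2, 1]
console.log(transition({ hue: "red", lightness: "normal" }, { hue: "red", lightness: "light" }));
console.log(transition({ hue: "blue", lightness: "normal" }, { hue: "red", lightness: "dark" }));
```

The point being that as long as a degraded scan gets mapped back onto the nearest palette entries consistently, every differential, and therefore the whole program, survives intact.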
The bech32 format is a favorite of mine because it uses an alphabet that's designed to be unambiguous, and its checksum is designed specifically to guarantee catching small numbers of character mistakes and to make it possible to suggest where the mistake likely is. It also has a built-in human-readable purpose prefix at the front. Since it's single-case (all lowercase, so it can be uppercased losslessly) it also fits into the QR alphanumeric mode, which doesn't support mixed case, so QR codes of bech32 IDs are more efficient.
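For the curious, the error detection comes from a small BCH-style checksum over the 5-bit symbols. A rough sketch of the core polymod step as defined in BIP-173 (not a full encoder; that also expands the human-readable prefix and appends six checksum characters):

```typescript
// Core of the bech32 checksum (a BCH code over GF(32)), per BIP-173.
const GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3];

function bech32Polymod(values: number[]): number {
  let chk = 1;
  for (const v of values) {
    const top = chk >>> 25;
    chk = ((chk & 0x1ffffff) << 5) ^ v;
    for (let i = 0; i < 5; i++) {
      if ((top >>> i) & 1) chk ^= GEN[i];
    }
  }
  return chk;
}

// The 32-character alphabet deliberately drops 1, b, i, o to avoid ambiguity.
const CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7";
```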
UUIDs should not be used as database primary keys unless the DBMS recommends it or you have a well-studied special reason for it. Postgres and MySQL are meant to use sequential bigints (bigserial / auto-increment) by default, even with Citus. Some sharded DBMSes like Spanner need non-sequential pkeys, but even Spanner explicitly tells you to use uuid4, because k-sortable keys cause hotspotting: https://cloud.google.com/spanner/docs/schema-design#uuid_pri...
I understand the performance implications of using a UUID for a primary key. And if performance is your primary concern, then this is good advice for large tables.
But if I could go back 25 years and only give myself one bit of advice, it would be to use UUIDs as the primary key. Because outside the narrow context of raw performance, they offer a lot of advantages.
There are advantages in numerous areas, but I'll focus on one for this post: distributed data.
We started by running a database on-prem. Each branch or store got its own db. 15 years later, always-on networking happened. 15 years after that, all businesses had fibre.
So now all the branches use a giant shared online database, with merged data. UUID-based, that merge would have been trivial. Bigint-based, yeah, it's not.
Along the same timeline data started escaping from our database. It would go to a phone, travel around a bit, change, get new records, then come home. Think lots of sales folk, in places without reception, doing stuff.
So you're right in the context of a single database (cluster) which encompasses all the data all the time.
But in the context where data lives beyond the database, using uuids solves a lot of problems.
There are other places as well where uuids shine.
So as with most advice when it comes to SQL, I'd add "context matters".
When data lives beyond the database, you need a uuid, but it doesn't need to be your pkey. Even your typical backend-frontend app with a single DB will often send uuids over the API.
If you're copying a DB, mutating it, then merging it back in, you just have to reset the bigint pkeys. I can see how in some contexts that might be less convenient (or, if merges are very frequent and reads are not, less performant), but that's a special case and not something to assume from the start. For example, I've done merges like this before pretty easily with bigints, and I've also been in places where they start out with uuid pkeys and then never benefit.
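To make the "uuid over the API, bigint as pkey" pattern concrete, a rough sketch of the usual compromise (table and column names are purely illustrative; assumes Postgres 13+ for gen_random_uuid() and node-postgres as the client):

```typescript
import { Pool } from 'pg';

async function main() {
  const pool = new Pool();

  // Internal bigint key for joins and indexing; a separate uuid is what
  // leaves the database (APIs, mobile sync, exports).
  await pool.query(`
    CREATE TABLE IF NOT EXISTS customers (
      id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
      public_id uuid NOT NULL DEFAULT gen_random_uuid() UNIQUE,
      name      text NOT NULL
    )
  `);

  // Everything outside the DB addresses rows by public_id, never by id.
  const { rows } = await pool.query(
    'SELECT name FROM customers WHERE public_id = $1',
    ['00000000-0000-4000-8000-000000000000'],
  );
  console.log(rows);

  await pool.end();
}

main().catch(console.error);
```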
Bearing in mind that primary key and clustered key are not necessarily the same thing, your point stands that the uuid does not need to be the clustered key.
Renumbering bigint primary keys, so as to effect a one-time merge, becomes substantially less trivial when a desire for minimal downtime is coupled with hundreds of related tables and tens of sites.
I can't speak for PG, but MySQL at least has a built-in function to resolve the time-ordering issue when storing v1 UUIDs (and a corresponding function to restore them to a valid UUID).
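For reference, I believe the functions in question are UUID_TO_BIN and BIN_TO_UUID (MySQL 8.0+), where the second argument swaps the v1 time fields so the stored BINARY(16) sorts roughly by creation time. A hedged sketch using mysql2:

```typescript
import mysql from 'mysql2/promise';

async function main() {
  const conn = await mysql.createConnection(
    process.env.DATABASE_URL ?? 'mysql://root@localhost/test',
  );

  await conn.query(`
    CREATE TABLE IF NOT EXISTS events (
      id   BINARY(16) PRIMARY KEY,
      body JSON
    )
  `);

  // Store: UUID_TO_BIN(uuid, 1) reorders the timestamp fields of a v1 UUID
  // and packs it into 16 bytes, so inserts land in roughly time order.
  await conn.query(
    'INSERT INTO events (id, body) VALUES (UUID_TO_BIN(UUID(), 1), ?)',
    [JSON.stringify({ kind: 'example' })],
  );

  // Read: BIN_TO_UUID(bin, 1) unswaps and re-expands to a valid textual UUID.
  const [rows] = await conn.query('SELECT BIN_TO_UUID(id, 1) AS id, body FROM events');
  console.log(rows);

  await conn.end();
}

main().catch(console.error);
```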
The CUID readme is wrong. You can safely ignore anyone who says "cloud-native" while discussing performance unless they're explaining why "cloud-native" architectures are often the worst of all possible designs for performance.
In Postgres for example, full_page_writes (default on, and generally not safe to turn off unless your filesystem/storage can guarantee atomic page writes) means the first time a page is modified after a checkpoint, the entire page gets written to WAL, even if you only changed one record. This makes your WAL grow way faster if you're doing random IO. So right off the bat that's a huge write amplification hit.
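A toy back-of-envelope of why random keys hurt here (the numbers are purely illustrative, not measured): random keys touch far more distinct pages per checkpoint than sequential ones, and each distinct page costs a full 8 kB image.

```typescript
// Toy model: N inserts between two checkpoints into a large btree index.
const PAGE = 8 * 1024;        // Postgres block size in bytes
const inserts = 100_000;
const rowsPerLeafPage = 200;  // illustrative fan-out

// Sequential keys: inserts pile onto the rightmost leaf, so only a handful
// of distinct pages are touched and need a full-page image.
const seqPagesTouched = Math.ceil(inserts / rowsPerLeafPage);   // ~500

// Random keys (e.g. uuid4): on a big index, nearly every insert lands on a
// different leaf page, so nearly every insert triggers a full-page image.
const randPagesTouched = inserts;                               // ~100,000

console.log('sequential FPI bytes ~', seqPagesTouched * PAGE);  // ~4 MB
console.log('random FPI bytes     ~', randPagesTouched * PAGE); // ~800 MB
```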
I recall manually entered commands have access to some APIs that are not normally accessible. Maybe timing related? So in the theoretical extreme: a timing attack that reads something out of your system memory and uploads it via HTTP... while you watch the game play :)
Do they? It's easy enough to 'give' any console-accessible API to the page: from the console, set a variable on the page that points to the console-only function (see the sketch below).
Given there aren't many sites that say "just open the console and paste this command to win the big prize", I suspect that any console-only APIs aren't very powerful, if they exist at all.
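Something like this sketch (plain DevTools, nothing exotic assumed; the global name is made up):

```typescript
// Run in the DevTools console: the console and the page share the same
// window, so anything assigned here is visible to page scripts.
(window as any).__fromConsole = (msg: string) => {
  // whatever the console context can reach goes here
  console.log('page called into console-provided code:', msg);
};

// Page-side code (shipped with the site) just polls for the hook:
const hook = (window as any).__fromConsole;
if (typeof hook === 'function') {
  hook('hello from the page');
}
```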
Because normies are extremely susceptible to things like "hack Facebook, see the PMs of any hot girl u want!111 just right-click and paste this into your browser console trust me bro"
As well as the usual engagement driving "challenges" like "Omg did you know there's no country starting with Z! Bet you can't think of one!" meanwhile comments are filled with "duuuuh Zanzibaaaar!" and post engagement is >>>>>>>>>>>>
Agreed. When Ruby became popular it was primarily competing against languages with verbose and rigid static type systems, like Java. That is no longer the case, and the benefits in safety and developer experience provided by a good type system now outweigh the cost.
They didn't mean that; they meant siphoning off data client-side, for reasons, like CSAM scanning.
The point, which I agree with, is that having to trust a single closed-source client implementation is not so different from trusting the servers of a non-E2E service.
The BIG difference is that you have to trust the hardware and the operating system already, and as these are made by Apple, you already have to trust them.
"Trusting the servers of a non E2E service" is adding another trusted party.
If you don't trust Apple, you don't have an iPhone.
We're solving the performance pitfall of CSS-in-JS libraries by making a compiler instead. In the vast majority of cases, there should be zero runtime cost to using StyleX.
I like it a lot. It lets you define a theme with design tokens like spacing and colors, then enforces using only those via TypeScript (with an escape hatch via the style prop), which also gives you editor autosuggest support without hacks like the Tailwind extension.
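For anyone who hasn't used it, the shape of the API is roughly this (a sketch from memory of the StyleX docs; import paths and exact names may differ, and in a real project defineVars has to live in its own *.stylex.ts file):

```tsx
import * as React from 'react';
import * as stylex from '@stylexjs/stylex';

// Design tokens: normally defined in a separate tokens.stylex.ts file,
// inlined here only to keep the sketch short.
export const tokens = stylex.defineVars({
  spacingMd: '8px',
  textPrimary: 'rgb(20, 20, 20)',
});

// Styles are authored with stylex.create and applied with stylex.props;
// the compiler extracts them to static CSS at build time, and TypeScript
// rejects tokens or properties that don't exist in the theme.
const styles = stylex.create({
  card: {
    padding: tokens.spacingMd,
    color: tokens.textPrimary,
  },
});

export function Card(props: { children: React.ReactNode }) {
  return <div {...stylex.props(styles.card)}>{props.children}</div>;
}
```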
- for Perl, OCR to a character set
- for Piet, manual “convert[ion] into a clean image file using close colours from the Piet palette”