"latency" and "throughput" are only mentioned in passing in the article, but that is really the crux of the whole "streaming vs. batch" thing. You can implement a stream-like thing with small, frequent batches of data, and you can implement a batch-like thing with a large-buffered stream that is infrequently flushed. What matters is how much you prioritize latency over throughput, or vice versa. More importantly, this can be quantified - multiply latency and throughput, and you get buffer/batch size. Congratulations, you've stumbled across Little's Law, one of the fundamental tenets of queuing theory!
A giga load balancer is no less viable than a giga Redis cache or a giga database. Rate limiting is inherently stateful - you can't rate limit a request without knowledge of prior requests, and that knowledge has to be stored somewhere. You can shift the state around, but you can't eliminate it.
Sure, some solutions tend to be more efficient than others, but those differences typically boil down to implementation details rather than fundamental limitations in system design.
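To illustrate the state in question, here's a minimal in-process token bucket (my own sketch, names and numbers made up). The `buckets` map is the per-client state that has to live somewhere — in-process here, in a shared cache or the load balancer itself in practice.

```rust
use std::collections::HashMap;
use std::time::Instant;

// Minimal token-bucket rate limiter.
struct RateLimiter {
    capacity: f64,       // max burst size, in tokens
    refill_per_sec: f64, // steady-state allowed rate
    buckets: HashMap<String, (f64, Instant)>, // client -> (tokens, last seen)
}

impl RateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, refill_per_sec, buckets: HashMap::new() }
    }

    fn allow(&mut self, client: &str) -> bool {
        let now = Instant::now();
        let (tokens, last) = self
            .buckets
            .entry(client.to_string())
            .or_insert((self.capacity, now));
        // Refill proportionally to the time since this client's last request.
        *tokens = (*tokens + last.elapsed().as_secs_f64() * self.refill_per_sec)
            .min(self.capacity);
        *last = now;
        if *tokens >= 1.0 {
            *tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = RateLimiter::new(2.0, 1.0); // burst of 2, then 1 req/s
    assert!(rl.allow("1.2.3.4"));
    assert!(rl.allow("1.2.3.4"));
    assert!(!rl.allow("1.2.3.4")); // bucket empty until time passes
}
```

You can move `buckets` into Redis or shard it across load balancers, but some equivalent of it always exists.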
The demands for internal memos aren't arbitrary; they follow due process of law. The plaintiff serves a subpoena. The defendant can contest it by filing a motion to quash, which the judge either grants or denies. But in a case like this, where the defendant is a large corporation (or a representative thereof) and the subpoenaed information is relevant to the facts of the case, the judge is likely to let the subpoena stand.
Yes, single scalar values tell you nothing useful if you have no a priori knowledge of the shape of your distribution. Mean and variance are meaningless figures if your distribution is multimodal, for example. If you truly have to compress the distribution into a few numbers, then the best thing to do is to represent it as a series of quantile values. In cases where the distribution is unknown, evenly spaced quantile values are a good start.
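As a sketch of what I mean (crude nearest-rank estimation, my own hypothetical `quantiles` helper; real code would interpolate):

```rust
// Summarize a sample with k evenly spaced quantiles.
fn quantiles(mut data: Vec<f64>, k: usize) -> Vec<f64> {
    data.sort_by(|a, b| a.partial_cmp(b).unwrap());
    (1..=k)
        .map(|i| {
            let q = i as f64 / (k + 1) as f64;
            let idx = ((data.len() - 1) as f64 * q).round() as usize;
            data[idx]
        })
        .collect()
}

fn main() {
    // A bimodal sample: the mean lands near 52, a value that describes
    // *no* actual observation, but the quantiles expose both modes.
    let sample: Vec<f64> = (0..50).map(|i| 10.0 + (i % 5) as f64)
        .chain((0..50).map(|i| 90.0 + (i % 5) as f64))
        .collect();
    println!("{:?}", quantiles(sample, 3));
}
```

Three evenly spaced quantiles already make it obvious that half the mass sits near 10 and half near 90, which mean and variance alone would hide.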
This is needlessly pedantic. useRef/useEffect are tools for implementing React's declarative model on top of an imperative reality. Things like canvas rendering APIs don't have a pure interface, but it's still obviously very useful to provide one (hence libraries like react-konva and react-three-fiber).
ARP is for the hosts on the LAN. L2 switches don't rely on ARP to build up their forwarding tables; they just inspect the source MAC of every Ethernet frame they receive and correlate it with the ingress port. Frames with unknown destination MACs are flooded out every other port, but that stops as soon as every device on the LAN has sent at least one frame (at least until table entries age out).
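The learning behavior is simple enough to sketch in a few lines (a toy model of my own, ignoring VLANs and aging):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum Forward {
    Port(u8),        // known destination: send out one port
    FloodExcept(u8), // unknown destination: flood all ports but the ingress
}

struct Switch {
    table: HashMap<[u8; 6], u8>, // MAC -> port (real switches also age entries out)
}

impl Switch {
    fn new() -> Self {
        Self { table: HashMap::new() }
    }

    fn receive(&mut self, src: [u8; 6], dst: [u8; 6], in_port: u8) -> Forward {
        // Learn from the *source* address of every frame — no ARP involved.
        self.table.insert(src, in_port);
        match self.table.get(&dst) {
            Some(&port) => Forward::Port(port),
            None => Forward::FloodExcept(in_port),
        }
    }
}

fn main() {
    let (a, b) = ([0xAA; 6], [0xBB; 6]);
    let mut sw = Switch::new();
    // B is unknown at first, so A's frame is flooded...
    assert_eq!(sw.receive(a, b, 1), Forward::FloodExcept(1));
    // ...but B's reply teaches the switch where both live.
    assert_eq!(sw.receive(b, a, 2), Forward::Port(1));
    assert_eq!(sw.receive(a, b, 1), Forward::Port(2));
}
```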
The non-determinism of Rust's HashMap/HashSet comes from the default hasher seeding itself from the operating system's RNG facilities. You can use HashMap::with_hasher() to swap in a BuildHasher that is deterministically seeded.
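For example, `BuildHasherDefault<DefaultHasher>` constructs hashers via `DefaultHasher::new()`, which uses fixed keys rather than an OS-provided seed (note the hash *algorithm* itself is not a stability guarantee across toolchain versions):

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::BuildHasherDefault;

// Same key → same hash on every run, unlike the default RandomState,
// which reseeds from the OS each time.
type DetMap<K, V> = HashMap<K, V, BuildHasherDefault<DefaultHasher>>;

fn main() {
    let mut m: DetMap<&str, i32> = DetMap::default();
    // Equivalent: HashMap::with_hasher(BuildHasherDefault::<DefaultHasher>::default())
    m.insert("alpha", 1);
    m.insert("beta", 2);
    m.insert("gamma", 3);
    // Iteration order is now reproducible across runs of the same binary.
    let order: Vec<&str> = m.keys().copied().collect();
    println!("{:?}", order);
}
```

Handy for reproducible tests and snapshot-style debugging, though you give up the HashDoS resistance that the random seed exists to provide.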
I feel like using calling conventions to massage the compiler's register allocation strategy is a hack. If the problem is manual control over register allocation, then the ideal solution should be... well, exactly that and no more? An annotation for local variables indicating "always spill this" (for cold-path locals) or "never spill this or else trigger a build error" (for hot-path locals). Isn't that literally why the "register" keyword exists in C? Why don't today's C compilers actually use it?
If the tail calling pattern made the code ugly, I would be more inclined to agree with this. But putting each opcode in its own function isn't so bad: it seems at least as readable as a mondo function that implements every opcode, if not more so.
By contrast, a mondo function that also has a bunch of register allocation annotations seems less readable.
I don't see how a hypothetical __attribute__((never_spill)) annotation on local variables would preclude splitting opcode logic into separate functions. It just means those functions would have to be inlined into the interpreter loop to avoid conflicts with calling convention constraints.
Agreed -- I'm just saying that the tail call pattern doesn't seem so bad to me. The shape it imposes on your code doesn't detract from readability in my opinion.