There exists a problem in real life that you can solve in the simple case, and invoke a theorem in the general case.
Sure, it's unintuitive that I shouldn't go all in on the smallest variance choice. That's a great start. But, learning the formula and a proof doesn't update that bad intuition. How can I get a generalizable feel for these types of problems? Is there a more satisfying "why" than "because the math works out"? Does anyone else find it much easier to criticize others than themselves and wants to proofread my next blog post?
Here's my intuition: you can reduce the variance of a measurement by averaging multiple independent measurements. That's because when they're independent, the worst-case scenario of the errors all lining up is pretty unlikely. This is a slightly different situation, because the random variables aren't necessarily measurements of a single quantity, but otherwise it's pretty similar, and the intuition about multiple independent errors being unlikely to all line up still applies.
Once you have that intuition, the math just tells you what the optimal mix is, if you want to minimize the variance.
This all hinges on the fact that variance scales like X^2, not X. If we look at the standard deviation instead, we get the expected homogeneity: stddev(tX) = abs(t) stddev(X). However, it is *not linear*; rather, stddev(sum t_i X_i) = sqrt(sum t_i^2 stddev(X_i)^2) assuming independent variables.
Quantitatively speaking, t^2 and (1-t)^2 are both < 1 iff 0 < t < 1. As such, the standard deviation of a convex combination of independent variables is *always strictly smaller* than the same convex combination of the standard deviations of the variables. In other words, stddev(sum_i t_i X_i) < sum_i t_i stddev(X_i) whenever all the t_i are strictly between 0 and 1 and sum to 1.
What this means in practice is that the standard deviation of a convex combination (that is, with positive coefficients summing to 1) of any number of independent random variables is always smaller than the same combination of their standard deviations, and in particular smaller than the largest of them. With a well-chosen mix it can even beat the smallest of them, which is exactly why going all in on the lowest-variance choice isn't optimal.
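A quick numeric check (the distributions and weights here are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent variables with different spreads (made-up numbers).
x = rng.normal(0, 2.0, size=1_000_000)  # stddev ~ 2.0
y = rng.normal(0, 3.0, size=1_000_000)  # stddev ~ 3.0

t = 0.5                                  # convex combination weight
mix = t * x + (1 - t) * y

# stddev of the mix: sqrt(0.25*4 + 0.25*9) ~ 1.80 ...
print(np.std(mix))
# ... which is below the mixed stddevs (2.5) and even below the smaller one (2.0).
print(t * np.std(x) + (1 - t) * np.std(y))
```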
"the average bitrate for a 4K Blu-ray DVD can range between 48Mbps to 75Mbps. Some discs can also carry around 100Mbps or even 128Mbps, but these are more rare."
This aligns closely with the hiring practices I learned in Industrial and Organizational Psychology. The only thing missing is to have structured interviews to reduce interviewer bias.
The best predictors of job performance are a simulation of the job and past performance. This is not new research or a secret.
Pop-trauma allows people to continue to believe "dreams can be achieved through hard work" while not blaming themselves for not achieving their dreams.
The reason I'm not living the dream could be that it's impossible, or I haven't tried hard enough. I don't want to believe either of those. I'd rather believe that something happened to me in my past that rewired my brain to stifle my full potential. Then I could still hope to someday achieve my dreams, while not doing anything to progress towards them.
It's not popular because it's right. It's popular because it's so, so appealing.
Same with "should". I feel like most "should" statements aren't helpful. Something should be done a certain way, but in the end, society should be perfect and we shouldn't have this problem in the first place!
The idea of using randomness to extend cliffs really tickles my brain.
Consider repeatedly looping through n+1 objects when only n fit in cache. In that case LRU misses/evicts on every lookup! Your cache is useless and performance falls off a cliff! 2-random turns that performance cliff into a gentle slope with a long tail.
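A toy simulation of that scenario (sizes and seed are arbitrary; "2-random" here evicts the less recently used of two randomly sampled entries):

```python
import random
from collections import OrderedDict

def lru_misses(keys, cache_size):
    """Count misses for strict LRU on the given access sequence."""
    cache = OrderedDict()
    misses = 0
    for k in keys:
        if k in cache:
            cache.move_to_end(k)           # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict the least recently used
            cache[k] = True
    return misses

def two_random_misses(keys, cache_size, seed=0):
    """Count misses when evicting the less recently used of two random entries."""
    rng = random.Random(seed)
    cache = {}                             # key -> last access time
    misses = 0
    for t, k in enumerate(keys):
        if k not in cache:
            misses += 1
            if len(cache) >= cache_size:
                a, b = rng.sample(list(cache), 2)
                del cache[a if cache[a] < cache[b] else b]
        cache[k] = t
    return misses

n = 100
accesses = [i % (n + 1) for i in range(20 * (n + 1))]  # loop over n+1 items
print(lru_misses(accesses, n))         # LRU misses on every single access
print(two_random_misses(accesses, n))  # far fewer misses after warm-up
```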
I bet this effect happens when people try to be smart and loop through n items, but have too much additional data to fit in registers.
This feels similar to when I heard they use bubble sort in game development.
Bubble sort seems pretty terrible, until you realize that it's interruptible. The set is always a little more sorted than before. So if you have realtime requirements and best-effort sorting, you can sort things between renders and live with the possibility of two things relatively close to each other appearing a little glitched for a frame.
That's a different problem. To quickly sort a nearly sorted list, we can use insertion sort. However, the goal here is to make progress with as little as one iteration.
One iteration of insertion sort will place one additional element in its correct place, but it leaves the unsorted portion basically untouched.
One iteration of bubble sort will place one additional element into the sorted section and along the way do small swaps/corrections. The % of data in the correct location is the same, but the overall "sortedness" of the data is much better.
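A minimal sketch of that incremental use of bubble sort (the frame-loop usage and key function are hypothetical):

```python
def bubble_pass(items, key=lambda v: v):
    """One bubble-sort pass: swap adjacent out-of-order pairs, left to right.

    Each call bubbles the largest remaining element to the end and nudges
    everything else closer to sorted, so it's safe to stop after any pass
    (e.g. run exactly one pass per rendered frame)."""
    swapped = False
    for i in range(len(items) - 1):
        if key(items[i]) > key(items[i + 1]):
            items[i], items[i + 1] = items[i + 1], items[i]
            swapped = True
    return swapped  # False means the list is now fully sorted

# Hypothetical usage: keep a draw list roughly depth-sorted, a little each frame.
draw_list = [5, 1, 4, 2, 8, 3]
bubble_pass(draw_list)   # call once per frame
print(draw_list)         # [1, 4, 2, 5, 3, 8] -- more "sorted" than before
```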
That's interesting. I never considered this before. I came across this years ago and settled on insertion sort the first time I tried rendering a waterfall (translucency!). Will have to remember bubble sort for next time
Quicksort usually switches to something like insertion sort when the number of items is low, because the constants are better at low n, even if the asymptotic complexity isn't as good.
The Hanoi clock represents time by mapping disk positions to binary bits: each legal tower state uniquely encodes one moment. The smallest disk moves every minute, and larger disks move exponentially less frequently, creating the beautiful recursive pattern.
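A sketch of the binary structure behind that, assuming the clock advances the standard optimal Hanoi solution by one move per tick: the disk moved at tick m is given by the lowest set bit of m, so disk k moves every 2^k ticks.

```python
def disk_moved(m):
    """Disk moved at step m of the optimal Tower of Hanoi solution
    (1 = smallest). It's the position of the lowest set bit of m,
    so disk k moves every 2**k steps."""
    return (m & -m).bit_length()

# The "ruler sequence": larger disks move exponentially less often.
print([disk_moved(m) for m in range(1, 16)])
# [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]
```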
I have a dream of a compiled reactive DSL for video game programming that gives you replay and rollback netcode automagically, eliminates bugs in state management, and naturally expresses derived state and the simulation step/transition function, while still being performant enough for real time.
The performance hit from all that indirection (registering, getters, setters, dependency discovery, traversal, and lambdas) could be avoided if we could compile the DAG into smartly nested ifs.
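A hand-written sketch of what such compiled output might look like, for a hypothetical two-edge graph (dist_from_origin depends on x and y; is_alive depends on hp). Nothing is registered or traversed at runtime; the dependency structure is baked into plain ifs.

```python
from dataclasses import dataclass

@dataclass
class State:
    # Source state for a made-up example.
    x: float = 0.0
    y: float = 0.0
    hp: int = 100
    # Derived state stored as plain fields, no getters or observers.
    dist_from_origin: float = 0.0
    is_alive: bool = True

def step(state: State, dx: float, dy: float, damage: int) -> None:
    """What a compiler might emit for the reactive graph:
    dist_from_origin <- (x, y), is_alive <- hp."""
    if dx != 0.0 or dy != 0.0:
        state.x += dx
        state.y += dy
        # Only recompute what actually depends on (x, y).
        state.dist_from_origin = (state.x ** 2 + state.y ** 2) ** 0.5
    if damage != 0:
        state.hp -= damage
        state.is_alive = state.hp > 0

s = State()
step(s, dx=3.0, dy=4.0, damage=0)
print(s.dist_from_origin, s.is_alive)  # 5.0 True
```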