
The 35mm format is a rather amazing balance of design trade-offs. No wonder it is so enduring.

It's large enough for the double-Gauss lens (a.k.a. the normal prime lens) to have a nice shallow depth of field wide open. f/1.2 is close to the limit of a typical SLR mount. So a normal 50mm f/1.2 lens gets us about 42mm of aperture. This means we can get the same depth of field and angle of view as a 6x7 medium format with a 110mm f/2.8 lens or a 4x5 large format with a 180mm f/4.5, but with a much smaller system! And we get a faster lens as a bonus.
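As a rough check of the numbers, the entrance pupil (focal length divided by f-number) comes out about the same in all three cases, which is why the depth of field matches:

  50mm  / 1.2 ≈ 42mm
  110mm / 2.8 ≈ 39mm
  180mm / 4.5 = 40mm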

Smaller formats lose some of that versatility of composition. The focal length of the normal lens on APS-C is about 32mm. We would need an f/0.8 lens to get the same DOF. That's not possible on an SLR mount. Mirrorless systems, with their shorter flange distances, could get us there. But even then there are limits to how short the flange can be, because image sensors become much less efficient as the angle of incidence of the light hitting them increases.


You only need 2 processors, since each core gives you 2 vCPUs. "For the n1 series of machine types, a virtual CPU is implemented as a single hardware hyper-thread" --https://cloud.google.com/compute/docs/machine-types


So the bare metal might still have 4 cores, such that it could host 2 of these VMs at once, I guess.


There is a proof shown in this handout: https://people.eecs.berkeley.edu/~sinclair/cs271/n15.pdf

It's hard to understand why this technique works so well without digging deep into the math. Roughly speaking, if you throw n balls into n bins at random, the maximum number of balls in any bin will grow surprisingly quickly (because of the birthday paradox). However, if we allow ourselves to choose between two random bins instead of one, and put the ball in the one with the fewest balls in it, the maximum number of balls in any bin grows much more slowly (i.e., O(ln ln n)). Hence, having that one extra random choice allows us to get surprisingly close to the optimal approach of comparing all bins (which would give us O(1)), without doing all that work.
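If you want to see the effect concretely, here is a tiny simulation (my own sketch, not from the handout): throw n balls into n bins with d random choices per ball and report the fullest bin.

  import random

  def max_load(n, choices):
      # Each ball picks `choices` bins uniformly at random and goes into
      # the least loaded of them; return the fullest bin at the end.
      bins = [0] * n
      for _ in range(n):
          picks = [random.randrange(n) for _ in range(choices)]
          bins[min(picks, key=lambda i: bins[i])] += 1
      return max(bins)

  for d in (1, 2, 3):
      print(d, max_load(1_000_000, d))

Going from one choice to two collapses the maximum load; adding a third choice only shaves off a bit more.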


Thanks for the explanation! Much clearer; now I get the concept. In the case of load balancing, we'd need a ton of servers (1000s?) for this to pay off vs just comparing all, right? Cache updating aside, most of the overhead would be in reading the load numbers in. Comparing a thousand numbers has to be quick in comparison, no?


The problem with load balancing is herd behavior. Stats for load are usually at least a little stale, because it's a distributed system where you can't afford to wait for consistency. When there are traffic spikes a whole herd of new connections will go to the least loaded server for a window of time where the cached "load" number is out of date. Picking two at random helps keep from a bunch of connections racing to one server, even when you're only running 3-4 of them.


That's a really intuitive explanation. Appreciate that.


Thank you sir!


The method is called "Power of Two Random Choices" (http://www.eecs.harvard.edu/~michaelm/postscripts/handbook20...). And the two-choices paradigm is widely applicable beyond load balancing. In particular, it applies to hash table design (e.g. cuckoo hashing) and cache eviction schemes (https://danluu.com/2choices-eviction/).


You're right, I updated the title. Got a little too clever with the whole "power" thing.


"while each additional choice beyond two decreases the maximum load by only a constant factor"

Mathemagical!


It also works for solving SAT. Try two literals and recurse on the one that can be made to satisfy more clauses.


What happens when the number of servers changes? The cache hit rate would likely drop to zero until it warms up again, which is a good way to accidentally overload your systems.

Load balancing based on consistent hashing is the better way to implement this.


When the number of servers changes, you slowly ramp up from mod n to mod (n+1). Flip a biased coin for each user to decide whether to use n or n+1, and slowly crank up the bias to the n+1 side.
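Roughly, the ramp could look like this (the hashing and names here are my own, just to illustrate):

  import hashlib

  def pick_server(user_id, n, ramp):
      # `ramp` goes from 0.0 to 1.0 over the migration window. Each user's
      # "coin" is derived from a hash, so a given user stays on the same
      # side of the split for a given ramp value.
      coin = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 1000 / 1000.0
      buckets = n + 1 if coin < ramp else n
      return int(hashlib.sha1(user_id.encode()).hexdigest(), 16) % buckets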


Consistent hashing is a slightly cleaner way to do it, but gives pretty much the same result as modulo-ing the user id against the number of servers. At least as I understand it, you consistently hash something (a user id, a request URL, etc.) into N buckets, where N is the number of servers, so changing N re-shuffles all of the buckets anyway.

Short of something like Cassandra's ring topology, how would you use consistent hashing to add new servers and assign them requests?


You are missing a crucial piece here to have consistent hashing: you also need to hash the names of the servers. With consistent hashing you hash both the requests and the names of the servers, then you assign each request to the server with the closest hash (under the modulus). With this scheme, you only need to remap 1/n of the keys (where n is the number of servers).
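A minimal sketch of that scheme (no virtual nodes, and the names are made up):

  import bisect
  import hashlib

  def _h(s):
      return int(hashlib.md5(s.encode()).hexdigest(), 16)

  class Ring:
      # Hash the server names onto a ring; a request goes to the server
      # whose hash is the next one clockwise from the request's hash.
      def __init__(self, servers):
          self.points = sorted((_h(name), name) for name in servers)

      def lookup(self, key):
          hashes = [h for h, _ in self.points]
          i = bisect.bisect(hashes, _h(key)) % len(self.points)
          return self.points[i][1]

Adding a server only adds one point to the ring, so only the keys that fall between that point and its predecessor move. Real implementations give each server many points on the ring to even out the load.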


You're kind of right. You can also use something like jump consistent hash [0] which only requires you to have a consistent ordering of the hosts where you're sending the information. We (Facebook) use something similar for our caches. It requires a linear array of hosts but you've already got that if you're load balancing.

[0] https://arxiv.org/abs/1406.2294
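The algorithm in the paper is tiny; roughly, its C++ translates to Python like this:

  def jump_consistent_hash(key, num_buckets):
      # Maps a 64-bit key to a bucket in [0, num_buckets). When the number
      # of buckets grows from n to n+1, only about 1/(n+1) of keys move.
      b, j = -1, 0
      while j < num_buckets:
          b = j
          key = (key * 2862933555777941757 + 1) % (1 << 64)
          j = int((b + 1) * ((1 << 31) / ((key >> 33) + 1)))
      return b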


That makes a lot of sense, thanks.

Better consistent hashing means that existing servers don't have their caches invalidated, but the new servers that were just added start with empty caches anyway so are fielding all uncached requests. Hopefully the bottleneck is actually with some shared layer behind it (a database or something) otherwise I guess you'd need to come up with a more complex way to slowly distribute more traffic to the new nodes.


Proper consistent hashing will only move (on average) K/N assignments when going from N to N+1 servers, for K original assignments.



If your employer 401k plan allows it, you can roll over after-tax 401k contributions to a Roth IRA (a.k.a. mega-backdoor Roth IRA).


Like Lights Out, this game can be solved using Gaussian elimination over Z/2 (the integers modulo 2). This means the order of the tiles you press does not matter, and that you can solve the board by forming the desired pattern row by row.


Sorry, how does it follow that you can form the pattern row by row? When you write this down as a linear algebra problem you kind of forget about the structure of the board, and the rows of the resulting matrix you apply Gaussian elimination to don't correspond to the rows of the game board. So you must have meant something else that I'm missing, but I'm confused how you would solve just the bottom row without touching any others.


For every board there are two chessboard patterns we can solve for. For example, for a 3x3 board we can have:

  101        010
  010   or   101
  101        010
where 1 is a white square and 0 is a black square.

We can represent any N×N board as a vector of length N². So, we can represent one of the chessboard patterns as:

  010
  101 = 010101010 = y
  010
At the beginning of the game, the board is in some random initial pattern.

  110
  111 = 110111010 = y0
  010
The goal of the game is to find the necessary moves to form the chessboard pattern. Hence, we want to solve the equation

  (y - y0) = Mx
where M is an N²×N² matrix representing the effect of every possible move. Pressing a tile is equivalent to adding 1 modulo 2:

  1 + 1 = 0 (mod 2)
  0 + 1 = 1 (mod 2)
For example, pressing the top left tile is the same as adding this effect vector to the board vector:

  110110000
If we do this for all 9 tiles of the 3x3 board, we find the matrix M to be:

  110110000
  111111000
  011011000
  110110110
  111111111 = M
  011011011
  000110110
  000111111
  000011011
where the i-th row is the effect of pressing the i-th tile. Now, we can just solve for x (assuming M is not singular). Re-using our above example, we find

  (y - y0) = Mx
  (010101010 - 110111010) = Mx
  100010000 = Mx
  111100100 = x   (by Gaussian elimination)
This means we can press the following tiles (in any order) on the board to solve it:

  111
  100
  100
If you are interested, I wrote some quick demo code that shows how this could be implemented in Python: https://gist.github.com/avassalotti/7452318c9e9b8dec637bc7c0...
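Here is also a minimal, self-contained sketch (separate from the gist) of the approach, using the press rule implied by the matrix M above (a press toggles the tile and all of its neighbours, diagonals included):

  def move_matrix(n):
      # Row r*n + c is the effect vector of pressing tile (r, c).
      M = []
      for r in range(n):
          for c in range(n):
              effect = [0] * (n * n)
              for rr in range(max(0, r - 1), min(n, r + 2)):
                  for cc in range(max(0, c - 1), min(n, c + 2)):
                      effect[rr * n + cc] = 1
              M.append(effect)
      return M

  def solve_gf2(M, b):
      # Gauss-Jordan elimination over Z/2 on the augmented matrix [M | b].
      rows = [row[:] + [bit] for row, bit in zip(M, b)]
      size = len(rows)
      pivots = 0
      for col in range(size):
          pivot = next((r for r in range(pivots, size) if rows[r][col]), None)
          if pivot is None:
              continue
          rows[pivots], rows[pivot] = rows[pivot], rows[pivots]
          for r in range(size):
              if r != pivots and rows[r][col]:
                  rows[r] = [a ^ b2 for a, b2 in zip(rows[r], rows[pivots])]
          pivots += 1
      if any(row[size] for row in rows[pivots:]):
          return None  # inconsistent: this pattern cannot be reached
      x = [0] * size
      for r in range(pivots):
          x[next(c for c in range(size) if rows[r][c])] = rows[r][size]
      return x

  y  = [0,1,0, 1,0,1, 0,1,0]  # target chessboard pattern
  y0 = [1,1,0, 1,1,1, 0,1,0]  # initial board from the example above
  diff = [a ^ b for a, b in zip(y, y0)]
  print(solve_gf2(move_matrix(3), diff))  # [1,1,1, 1,0,0, 1,0,0]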


This is a very nice exposition of the strategy, but I still don't see how it answers my more specific question, about whether you can solve the problem row by row (of the grid).


Sorry, that was a misunderstanding on my part. It turns out we cannot solve a board row by row without revisiting the previous rows. The board below is a counterexample:

  101
  010
  100
The first and second rows are solved. However, we cannot solve the last row without re-doing the first and second rows. The two solutions of this board show this:

  solution #1:
  110
  100
  000

  solution #2:
  110
  110
  000
Actually, seeing my mistake made me challenge my assumption that a matrix that is singular over the reals might not be singular over the integers modulo 2. This is likely wrong too. I don't know much about abstract algebra (and I am not a mathematician). Wikipedia (https://en.wikipedia.org/wiki/Determinant#Square_matrices_ov...) states that "the reduction modulo m of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m."

The move matrix M for all boards of size 3n + 2 appears to be singular. This means these boards may have no solution or a large number of solutions.


Gotcha, thanks.

FYI, a matrix taking values in {0,1} that is singular over R is also singular over Z/2 (though not necessarily the other way around), for exactly the reason you quote: the determinant commutes with reduction mod 2.


This is interesting, but I don't really understand it even though I've tried doing what you're saying. Maybe a side note: I'm very interested in learning algorithms, but they are so inaccessible to me.


Have you taken linear algebra? It gives a framework for solving systems of linear equations, and the parent post is noting that the game state is a linear function of which squares you've pressed, when considering the state of a square as either one or zero and doing arithmetic mod 2. So the standard techniques of linear algebra, such as Gaussian elimination, can solve this system. However, you still have to write down the large matrix that encodes what each square does (an n x n board will need an n^2 x n^2 matrix which shows how each square affects all the others once pressed). Then you solve that system.

For this particular game there is probably a more intuitive way to directly solve it.


During the winter, most fresh vegetables in Canada are imported from Mexico and the US. These are mostly bred for shelf life and are usually bland. In the summer, local fruits and vegetables can be quite tasty if you know where to shop. The public markets of Montreal are very good (http://www.marchespublics-mtl.com/). You can also subscribe to weekly organic baskets from local farms during the harvest season (http://www.paniersbio.org/en/).


On the other hand, the bland stuff doesn't poison you as much with lead.

http://www.environmentalhealthnews.org/ehs/news/lead-in-vine...


In the U.S., Californian olive oils are your best bet. Otherwise, Costco Kirkland Signature olive oil (which is now Greek oil [1]) is another good choice.

Outside of Italy, it's difficult to find Italian olive oils that are not blended. The 2014 and 2015 harvest seasons have been terrible due to a bacterial blight [2] that crippled many olive farms in Southern Italy.

[1] http://www.seattletimes.com/business/retail/costco-embraces-...

[2] https://en.wikipedia.org/wiki/Xylella_fastidiosa

