"The research described here was conducted as part of the Stanford Integrated Digital Library Project, supported by the National Science Foundation under Cooperative Agreement IRI-94 11306. Funding for this cooperative agreement is also provided by DARPA and NASA, and by Interval Research, and the industrial partners of the Stanford Digital Libraries Project."
Possibly they refer to this comment: https://news.ycombinator.com/item?id=39172527

"I received a link to a Google doc on Slack recently, but the owner had forgotten to share permissions with me. Though I couldn't view the doc when I clicked it, I did notice that I could view the first page of the doc in the link preview. It was very high res and I could view the text clearly."
If so, pasting that link into Slack may reveal its first page.
It could even be a bug in a Google Docs Slack app: it generates a preview whenever the sharer has permissions on the doc (and they usually do) and preview generation is enabled.
(Not the same guy, but) I've definitely heard about this bug in the past, though I assume it's fixed now. I can't actually find a reference for it; if I find one within the Hacker News comment edit window, I'll add it here.
I'm interested in learning more about the benefits of MECE - I've never heard that before. In particular, it seems radically different from Divio's system [0], which presents the same information in many different ways.
(I'm engaged somewhat in trying to get our team to write any documentation; once I've got that, I'll start trying to organize along exactly these principles)
Yes, that's Diataxis (formerly Divio). I faced similar challenges and found that combining it with MECE principles in my PAELLADOC framework made documentation much easier, especially with AI tools. Good luck getting your team started.
Great question about MECE vs Divio's system! They actually complement each other rather than conflict.
MECE (Mutually Exclusive, Collectively Exhaustive) comes from management consulting and focuses on organizing concepts without overlap or gaps. Divio's system focuses on documentation types (tutorials, how-to, reference, explanation).
In the AI era, your question raises a fascinating point: if AI can dynamically adapt content to the user's needs, do we still need multiple presentation formats? I believe we do, but with a shift in approach.
With AI tools, we can maintain a single MECE-structured knowledge base (optimized for conceptual clarity and AI consumption) and then use AI to dynamically generate Divio-style presentations based on user needs. Rather than manually creating four different document types, we can have AI generate the appropriate format on demand.
In my experiments, I've found that a well-structured MECE knowledge base allows AI to generate much more accurate tutorials, how-tos, references, or explanations on demand. The AI adapts the presentation while drawing from a single source of truth.
This hybrid approach gives us the best of both worlds: conceptual clarity for AI consumption, and appropriate presentation for human needs - all while reducing the maintenance burden of multiple document versions.
Because the thread was discussing CG becoming a commodity, and Toy Story was the first thing that came to mind for 90s CG; I have a vague recollection that it was the first feature-length full-CG film.
I only checked its production budget while writing my comment.
Actually, I picked the first CGI movie from the 90s, and it just happened to be good and very cheap.
But more importantly, the other half of my point was that $250 million ought to be enough to pay for a high effort production. It's not like "well Blender is free now so of course theatres are flooded with amateur CG films since their production has been commoditized".
Correcting for inflation (I used this tool by the US Bureau of Labor Statistics: https://www.bls.gov/data/inflation_calculator.htm), 30M USD in nov. 1995 would have a purchasing power equivalent to roughly 62M USD in feb. 2025. This is below half the budget of Moana 2 (150M USD, released in nov. 2024) for instance.
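For anyone who wants to reproduce the arithmetic, here's a quick sketch. The CPI-U values below are rough approximations I've plugged in for illustration, not exact BLS figures; the BLS calculator linked above is the authoritative source.

```python
# Sketch of CPI-based inflation adjustment. The CPI values are assumed
# approximations for late 1995 and early 2025, not official BLS figures.
CPI_1995 = 152.4
CPI_2025 = 315.6

def adjust_for_inflation(amount, cpi_then, cpi_now):
    """Scale a dollar amount by the ratio of the two CPI values."""
    return amount * cpi_now / cpi_then

budget_1995 = 30_000_000  # Toy Story's reported production budget
adjusted = adjust_for_inflation(budget_1995, CPI_1995, CPI_2025)
print(f"${adjusted / 1e6:.0f}M in 2025 dollars")  # roughly $62M
```

With these approximate CPI values the result lands near the $62M figure quoted above, less than half of Moana 2's reported $150M budget.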
I would never use the official inflation numbers (they underestimate actual inflation). It's easy to see that the most expensive movie ever made back in the day has a much lower budget than the most expensive movie made now, even adjusted for the official inflation rate.
There does seem to be a sort of sampling bias that I've only recently noticed, and I think it does come from being older now. I started getting back into old retro games I used to play, and I can't help but realize how many games back then were really bad, not worth playing at all; I had just cherry-picked the good ones. And being older, I'm not into gaming anymore, or really much of a consumer at all beyond essential goods, whereas when you're younger you consume more entertainment products, like games. So I think there's definitely some sampling bias going on that makes things look like they're getting worse.

Or it could be both: things could actually be getting worse, just not as much as it looks because of that sampling bias. There are real regressions, though. Like having to have multiple accounts (a Switch account plus some special Switch account and/or another account just to play a game), or buying a game only to find there's an online store inside it too, or buying a game in person but not being able to get a digital copy, or buying a digital copy and not being able to get a physical copy made for a flat fee, or that increasingly people don't actually own things anymore and it's all subscriptions or some sort of permission to use, or that a lot of games are just remakes of older games, or that you can't play single-player offline, or that you can't transfer or give the digital game you "bought" and "own" to someone else (unless it's a physical copy, obviously), etc.
I mean, I see no issue with comparing high-profile old games with high-profile new games. The thing is that there are fewer high-profile bad games because... well, back then, when you put in that kind of money, you were trying to go for quality, I suppose.
It also was because development budgets were microscopic compared to today, so a bad release from a 5-person, 12-month dev team won't bomb as badly as a 500-person, 5-year "blockbuster" release. So yeah, Superman 64 was laughably bad but didn't sink a company the way Concord, or even a not-that-bad game like Saints Row, would.
Economy is different, as is the environment. There's still quality, but when a game flops, it's a tsunami level flop and not just a painful belly flop.
If the neighbor kid (Sidney "Sid" Phillips) from Toy Story appeared in a modern movie with a similar budget (not even inflation adjusted), people would comment about the bad CGI.
Toy Story was a good idea because attempts at depicting humans with CGI at the time had a very plastic look.
I can't wait to read that blog post. I know you're an expert in this and respect your views.
One thing I think that is missing in the discussion about shared data (and maybe you can correct me) is that there are two ways of looking at the problem:
* The "math/engineering" way, where once state is identical you are done!
* The "product manager" way where you have reasonable-sounding requests like "I was typing in the middle of a paragraph, then someone deleted that paragraph, and my text was gone! It should be its own new paragraph in the same place."
Literally having identical state (or even identical state that adheres to a schema) is hard enough, but I'm not aware of techniques to ensure 1) identical state 2) adhering to a schema 3) that anyone on the team can easily modify in response to "PM-like" demands without being a sync expert.
I didn't quite follow how you can actually prove that you've solved a sudoku via reduction to graph coloring. If I understand correctly, an important part of the graph coloring protocol is that the prover permutes the colors between each round (otherwise the verifier can just iteratively learn the color of every node).
But all sudoku puzzles have the same graph structure - a puzzle instance is a partial assignment of colors to nodes.
So can't a verifier gain knowledge about the prover's solution by asking for edges that correspond to known values?
The way the conversion is done here, different sudokus produce different graphs. Besides the regular sudoku graph structure, there are nine additional nodes, one per digit, all connected to each other to ensure they take nine distinct colors. Each pre-filled clue cell is then connected to the digit nodes for the eight other digits, which forces that cell to take the color of its own digit. This way, the graph doesn't need any pre-coloring to encode the sudoku, clues included.
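A minimal sketch of this construction, assuming the standard reduction where each clue cell is linked to the digit nodes of the eight *other* digits (the function name and node representation are my own, not from the blog post):

```python
from itertools import combinations

def sudoku_to_graph(clues):
    """Reduce a 9x9 sudoku to a proper 9-coloring instance.
    clues: dict {(row, col): digit} for pre-filled cells, digits 1-9.
    Returns a set of undirected edges over ('cell', r, c) and ('digit', d) nodes.
    """
    edges = set()

    def add(u, v):
        edges.add((u, v) if u < v else (v, u))

    cells = [('cell', r, c) for r in range(9) for c in range(9)]

    # Regular sudoku constraints: cells in the same row, column, or 3x3 box
    # must receive different colors.
    for (t1, r1, c1), (t2, r2, c2) in combinations(cells, 2):
        if r1 == r2 or c1 == c2 or (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3):
            add((t1, r1, c1), (t2, r2, c2))

    # The nine digit nodes form a clique, so they get nine distinct colors.
    for u, v in combinations([('digit', d) for d in range(1, 10)], 2):
        add(u, v)

    # Each clue cell is adjacent to the eight *other* digit nodes, forcing
    # it to take the one remaining color: that of its own digit.
    for (r, c), d in clues.items():
        for other in range(1, 10):
            if other != d:
                add(('cell', r, c), ('digit', other))

    return edges
```

An empty puzzle yields 810 cell-conflict edges plus 36 digit-clique edges, and each clue adds 8 more, so distinct puzzles really do produce distinct graphs.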
I haven't read this particular blog post, but the solution I remember seeing is that you randomly permute the colors before each edge verification, so the rounds are independent. All the edges connect values that are required to differ, so when you verify they're different you gain no information.
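The per-round permutation idea can be sketched like this (hypothetical names, and deliberately omitting the commitment scheme a real zero-knowledge protocol needs to stop the prover from choosing colors after seeing the challenge):

```python
import random

def prover_round(coloring, edge):
    """One round: the prover freshly permutes the color labels, then reveals
    only the two (relabeled) colors at the challenged edge's endpoints."""
    colors = sorted(set(coloring.values()))
    perm = dict(zip(colors, random.sample(colors, len(colors))))
    u, v = edge
    return perm[coloring[u]], perm[coloring[v]]

# Verifier's check: the two revealed colors must differ.
coloring = {'a': 1, 'b': 2, 'c': 3}
cu, cv = prover_round(coloring, ('a', 'b'))
assert cu != cv  # a valid coloring always passes
```

Because the permutation is fresh each round, the verifier sees a uniformly random ordered pair of distinct colors regardless of which edge it asks about, so repeated queries (even on clue edges) reveal nothing about the underlying coloring.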
https://snap.stanford.edu/class/cs224w-readings/Brin98Anatom...