Incident Pit (wikipedia.org)
68 points by v4n4d1s on Feb 12, 2017 | 28 comments



Long time member of BSAC here. The classic example I use when describing this to students is torch batteries. If you're at home and you notice that your dive light batteries are flat, no biggie, just swap them out. If you notice while on the boat that's a bit more annoying, but the chances are someone will have a spare you can borrow, and if they don't, you will have to miss the dive, which is unfortunate but you're not in any danger. If you're in the water, you can still call the dive (abort and return to the boat). But if you're at 50m and entered a wreck and your buddy has just swum round a corner taking their light with them and you switch yours on and nothing happens, then you have the rest of your life to try and find the way out.

Same, seemingly trivial, failure, but very different consequences depending on the point in the dive at which you notice it.

By the way I mainly dive with GUE now who mandate a primary light and two backups as part of standard kit!


I lived on a boat and traveled around North America for 2 years. We called what you describe here "the boat rule": do something when you think of it, because later on the failure to do it will have increased negative consequences. Replacing flashlight batteries is a great example.


I came to the comments because I was struggling to "get it" from the Wikipedia page.

Your comment was what I needed.

Do you know if that example is somewhere that can be cited on the wiki to make it more accessible?


Wouldn't it be more like: underwater you find out your batteries are perhaps low on power, but you continue on, since even if they go out you still have all your companions as backup.

But you just stared down the incident pit?

You still have multiple layers of security, but normalizing the removal or weakening of one goes against the point of having the multiple layers in the first place.

In your example you seem screwed immediately?


It's just an illustrative example. A real incident pit is dangerous because you need the experience to recognise one, and you need the courage to be the one who aborts a dive that people might have traveled a long way, paid money, etc. to do. Fortunately you can just say "incident pit" and everyone will immediately perform their own independent analysis, if they're experienced.

The "problem" with modern regulated torch designs is that they don't appreciably dim as the power runs down; they're bright and then they just stop over the course of a few seconds. Old designs you could easily spot when the batteries were ready to be replaced/recharged well in advance.


Jesus Christ I think I just had a mini panic attack reading that.

I am clearly not good diver material.


Fellow BSAC member here. We recently started calling it the "Pit of death".


Related is William Gibson's idea of "the Jackpot", or a "multicausal apocalypse", where a large catastrophe is caused not by a single major factor but by several smaller ones accumulating and interacting over time. Gibson talks about it in https://vimeo.com/116132074.

If this is the kind of thing you find interesting, you should also read How Complex Systems Fail (http://web.mit.edu/2.75/resources/random/How%20Complex%20Sys...).


A good book in this space is Normal Accidents[0], which describes (among others) the impressive cumulative failure that caused the Three Mile Island incident. Fascinating read and applicable to software systems as well.

[0]: https://en.wikipedia.org/wiki/Normal_Accidents


There is a specialized application of this concept that is sometimes used in airway management during anesthesia (specifically endotracheal intubation). It is referred to as the "vortex" approach (i.e. you don't want to get pulled into the vortex, as the longer you spend there, the harder it is to get out).

There is a well produced reenactment of an anesthesia team falling victim to the "vortex" (resulting in fatal injury to their patient): https://vimeo.com/103516601

In fire and EMS we generally refer to the concept of an "accident chain". In any event where rescue personnel are injured or killed, there is a chain of events that had to take place leading up to that accident. Breaking the chain at any point would prevent the accident from occurring, and many of our procedures are built around the idea of breaking accident chains as early as possible. This is a concept that (as far as I know) originated in the aviation industry.

https://en.wikipedia.org/wiki/Chain_of_events_(aeronautics)

It's the same basic idea though... The further along the chain you allow the event to progress (even if you don't know the end result), the less margin for error you have.


Wow, that video was tough to get through (excellent link!). As soon as her HR started dropping with low sats she was circling the drain on what should have been a routine, relatively straightforward surgery.

It's interesting to me that, also coming from an EMS background, my (armchair) reaction to that situation is to escalate the intervention much, much more rapidly (I would have tubed the patient as soon as her HR started dropping, and tried an OPA much sooner). I suspect that this has a lot to do with my internal mental framing of airway management: in the field, airway problems can quite quickly lead to death -- the framing emphasizes risk -- but in a hospital setting, especially a surgical one, mild-to-moderate airway problems are frequently encountered, and the framing emphasizes minimizing negative impact to the patient.


Yeah, in a prehospital setting we are a lot more sensitive to the fact that there is no cavalry coming.


Having a name for the concept seems helpful, but I don't find that diagram in any way enlightening.


Likely because you're viewing it more than 45 years after it was drawn. Our ideas about information visualization - and the tools available for creating it - were a lot less sophisticated back then.

I actually think the graph is great, once explained. It gives a good visual summary of both things it's trying to convey (the overall point, and the different speed at which incidents can develop).

Also interesting to think of this writ large. How many companies do you know of that are hanging out in the middle section without realizing how close they are to a point of no return?


A lot of graphs don't make any sense unless you view them as part of the talk they're presented in.

I see a lot of bureaucratic NHS slides that are meaningless until someone talks you through them.


I've been reading a sci-fi novel, "Pushing Ice" by Alastair Reynolds, and the "Incident Pit" term is used a lot in it.


Even though the term is not used in it, possibly the best sci-fi novel example of an Incident Pit is "The Wreck of the River of Stars" by Michael Flynn. A ship's crew independently make a set of small decisions, any one of which alone would not result in failure, but which in total result in the loss of the ship and much of its crew. It is much more character-driven than most sci-fi, since the story is basically about a breakdown of teamwork due to the crew's inaccurate perceptions of each other.


I'm just about to start this book. It's loaded up on the Kindle right now, ready to go, having aborted 'Man in the High Castle' about a third of the way through.

As a big fan of Alastair's work I'm hoping for a good read to get over the last failure!


If this is marketing for the book (also mentioned in the Wikipedia article) it's bloody well done.

It's probably not, since the book is 10 years old, but this is how it's done properly, I think.


I think this accurately describes American history for the past 15 years and explains why establishment hate is so mainstream.

Iraq war, housing bubble, housing collapse, bailouts, rise of ISIS, student debt explosion, ...

Eventually there is a point where confidence is lost.


For years I've used (and I'm most likely not the first to have coined this) the "Gravity Well of Fail" to describe situations that become increasingly more perilous due to badly chosen decision paths as time passes. I didn't actually know about the term "Incident Pit", or perhaps only vaguely remember this or a similar term from a couple of pals who are scuba divers.


It seems like "pot committed" (the poker term) is in the same vein.


"Pot committed" has a subtly different meaning. A player who is pot committed hasn't necessarily made a mistake and isn't necessarily in a bad situation; the pot has just grown so large relative to their stack that it's worth calling even with a very weak hand. If the pot is offering you 15/1 and you think your opponent has a 10% probability of bluffing, it's worth calling with ace high. It's a slightly tricky concept to apply, because a lot of players who think they're pot committed are just falling prey to the sunk cost fallacy.

Many startup founders are effectively pot committed. The potential reward of an acquisition is very high; the cost of staying in business is just the opportunity cost of your time. Running the company until either bankruptcy or acquisition often has a higher expected value than quitting to do something else, even if the situation looks dire.


Accidentally delete some data. Panic, reach for backups. Restore fails and it removes more data and takes service off-line.


This Quanta article on research into predicting disease outcomes strikes me as related:

https://www.quantamagazine.org/20160830-who-will-die-from-in...



[pdf]


That's also a great visualization of how we build our society.

Don't fall off the track.



