Hacker News
The Manuscripts of Edsger W. Dijkstra (utexas.edu)
217 points by nathan-barry 17 hours ago | hide | past | favorite | 84 comments




The most important one in the context of 2025 is this one:

On the foolishness of "natural language programming". https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...


Thanks for the link. Great read.

Apparently Dijkstra loved using em-dashes!


If you look at the scanned PDF of what looks like a typewritten document, he used two hyphens everywhere. Links are at the top of the page.

> when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.

This hits so hard. cough dynamic typing enthusiasts and vibe coders cough


> Another thing we can learn from the past is the failure of characterizations like "Computing Science is really nothing but X", where for X you may substitute your favourite discipline, such as numerical analysis, electrical engineering, automata theory, queuing theory, lambda calculus, discrete mathematics or proof theory. I mention this because of the current trend to equate computing science with constructive type theory or with category theory.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD12xx/EWD124...


I’m not sure that is the focus of most serious dynamic languages. For me, it’s the polymorphism and code re-use they enable that the popular static languages generally aren’t close to catching up to.

I’m curious, can you give an example that wouldn’t be solved by polymorphism in a modern statically typed OO language? I would generally expect that for most cases the introduction of an interface solves for this.

Most examples I can think of would be things like “this method M expects type X” but I can throw in type Y that happens to implement the same properties/fields/methods/whatever that M will use. And this is a really convenient thing for dynamic languages. A static language proponent would call this an obvious bug waiting to happen in production when the version of M gets updated and breaks the unspecified contract Y was trying to implement, though.


I love that essay. It's such a joy to read, and even though it is very short and to the point it says so much both about the topic itself and society at large.

And it's just so obviously correct.


It is not obviously correct. The unstated premise is that programming is - or should be - similar to writing mathematical proofs.

The proofs/programs (Curry-Howard) correspondence has been fairly well established, I think.

It is a bit like saying architecture is just physics or painting is just chemistry. Technically true in some reductionist sense, but not necessarily the most useful way to think about it.

It's the requirements discovery phase that always breaks every pure mathematical treatment of software development.

(And also, as Dijkstra points out, software is way more complex. But in this case, it's the requirements discovery.)


That’s because it’s not pure math, but applied math. Which also has the requirements discovery phase.

Suffers from philosophical liberalism I'd say

Which leads to the assumption that some people "just" don't want to be better.

This mental framework is how society justifies the superiority of some persons while ignoring the material realities of others.

This framework is the root of classism, the root of racism, the root of elitism, and finally it manifests in individuals as narcissism.


IMHO you are going way, way, way too far. Far into the weeds.

Nah, I think they're probably seeing warning signs. So much of Dijkstra's writing is pseudo-intellectual pretentious word salad. Which is a shame bc he was an actual intellectual.

This is gold! Thanks.

> As a result of the educational trend away from intellectual discipline, the last decades have shown in the Western world a sharp decline of people's mastery of their own language

Dijkstra already wrote this in the 80s and today many teachers still complain about this fact. I also know that, at least in the Netherlands, the curriculum is judged based on the percentage of students that pass. If too few students pass, then the material is made easier (never harder!), so you can imagine what happens if this process continued for half a century by now.


South Africa is a sad example of this. And so systems are deteriorating country-wide.

Something which I occasionally link to, is this: <https://www.cs.utexas.edu/users/EWD/ewd08xx/EWD831.PDF>. It not only shows why computer languages should start their indexes at 0 (instead of 1), but also shows why intervals should be specified as lower-inclusive and upper-exclusive.

That particular EWD is one of my pet peeves, because of how it always pops up in discussions about array indexing. There are several situations where 1-based indexing is better, but which Dijkstra doesn't mention. For instance, one-based is much better for iterating backwards.

I think a compelling argument can be made that 0-based is better for offsets and 1-based is better for indexes, and that we should not think of both as the same thing. https://hisham.hm/2021/01/18/again-on-0-based-vs-1-based-ind...


One-based is not better for iterating backwards.

Zero-based indexing is naturally coupled with using only half-open ranges.

When using zero-based indexing and half-open ranges, accessing an array forwards, backwards or circularly is equally easy.

In this case you can also do like in the language Icon, where non-negative indices access the array forwards, while negative indices access the array backwards (i.e. -1 is the index of the last element of the array, while 0 is the index of the first element).

In languages lacking the Icon feature, you just have to explicitly add the length of the array to the negative index.

There is absolutely no reason to distinguish offsets and indices. The fewer distinct kinds of things you need to keep in mind, the fewer chances for errors from using the wrong thing. Therefore one should not add extra kinds of things to a programming language unless there is a real need for them.

There are things that are missing from most languages and which are needed, e.g. separate types for signed integers, unsigned integers, modular numbers, bit strings and binary polynomials (instead of using ambiguous unsigned integers for all four of the latter types, which prevents the detection of dangerous errors, e.g. unsigned overflow), but distinguishing offsets from indices is not a useful thing.

Distinguishing offsets and indices would be useful only if the set of operations applicable to them were different. However, this is not the case, because the main reason for using indices is to be able to apply arithmetic operations to them. Otherwise, you would not use numbers for indexing, but names, i.e. you would not use arrays, but structures (or hash tables, when the names used for access are not known at compile time).


The problem is that half-open ranges work best when the start is closed and the end is open. In forward iteration we use [0,n), but for backwards iteration we have to use (-1, n-1] or [0,n-1], both of which are kinda clunky.

One should always use a single kind of half-open range, i.e. with the start closed and the ending open.

The whole point here is to use a single kind of range, without exceptions, in order to avoid the errors caused by using the wrong type of range for the context.

For backwards iteration the right range is [-1,-1-n), i.e. none of those listed by you.

Like for any such range, the number of accessed elements is the difference of the range limits, i.e. n, which is how you check that you have written the correct limits. When the end of the range is less than the start, that means that the index must be decremented. In some programming languages specifying a range selects automatically incrementing or decrementing based on the relationship between the limits. In less clever languages, like C/C++, you have to select yourself between incrementing and decrementing (i.e. between "i=0;i<n;i++" and "i=-1;i>-1-n;i--").

It is easy to remember the backwards range, as it is obtained by the conversion rules: 0 => -1 (i.e. first element => last element) and n => -n (i.e. forwards => backwards).

To a negative index, the length of the array must be added, unless you use a programming language where that is done implicitly.

In the C language, instead of adding the length of the array, one can use a negative index together with a pointer to one element past the array, e.g. obtained as the address of the element indexed by the length of the array. Such a pointer is valid in C, even though dereferencing it directly would be an out-of-range error, as would taking the address of any element with an index greater than the length of the array. The validity of such a pointer is specified in the standard exactly to allow accessing an array backwards, using negative indices.


What would you do if your array is so large that it requires an unsigned int64 index?

The current AMD64 specification only uses 48 bits of pointer space, up from 40 bits. So we still have 16 bits remaining. I'm sure we can use 1 for a sign.

And Dijkstra's argument is actually quite weak if you read it carefully. But he has a certain way of writing which makes it seem almost like a mathematical proof. And then he sprinkles in some “only smart people agree with me” nerd baiting.

This is a matter of opinion. I consider Dijkstra's arguments quite strong.

Some decades ago, I disassembled and studied Microsoft's Fortran compiler.

The fact that Fortran uses 1-based indexing caused a lot of unnecessary complications in that compiler. After seeing firsthand the problems caused by 1-based indexing, I have no doubt that Dijkstra was right. Yes, the compiler could handle 1-based indexing perfectly fine, but there really was no reason for all that effort, which would have been better spent on features providing a serious advantage for the programmer.

The use of 1-based indexing and/or closed intervals, instead of consistently using only 0-based indexing and half-open intervals, is likely the cause of most off-by-one errors.


In my view, zero based is good for making hardware. Resetting a pointer to zero allows using the same circuit for each bit.

Granted, that's an argument for hardware, not for languages, and even the hardware angle is probably long obsolete.


While I don’t disagree with his argument for preferred conventions in an era of accumulators/GP registers, I am surprised he didn’t call out why Fortran used 1:

The IBM 704's index registers were decrementing, subtracting their contents from an instruction's address field to form an effective address.

Type A instructions had a three bit tag, indicating which index registers to use.

With those three indexes you get Fortran’s efficient 7D column-major arrays. This made data aggregation much more efficient, and Cray/CUDA/modern column-oriented DBs do similar.

But for Algol-like languages on PDP-inspired hardware I do like his convention, if, like the example, you have a total ordering on the indexes.

But Fortran also changed the way it was taught, shifting from an offset into memory, indexed down from an address (which makes sense when the number of bits in an address space changes with machine configuration), to index values.

Technically Fortran, at least on word machines, meets his convention.

       pointer + offset <= word < pointer + offset +1
It is just the offset is always negative.

FORTRAN was defined a few years before IBM 704 (in 1954).

The use of 1-based indexing was just inherited from the index notation used for vectors and matrices in most mathematical journals at that time.

When IBM 704 was designed (as the successor of IBM 701 and taking into account the experience from IBM NORC), it was designed to allow an efficient implementation of FORTRAN, not vice-versa.

The column-major ordering of FORTRAN allows more efficient implementations of the matrix-vector and matrix-matrix products (by reading the elements sequentially), when these operations are done correctly, i.e. not using the naive dot product definitions that are presented in most linear algebra manuals instead of the useful definitions of these operations (i.e. based on AXPY and on the tensor product of vectors).


Seeing book sections or chapters starting with zero always confuses me. I know that this convention is probably inspired by the fact that the addresses of memory locations start with zero. But that case arose because one of the combinations of the voltages can be all zeros. So it's actually a count of combinations, and I don't think it can be used for ordinal enumeration of worldly things such as book chapters, or when talking about spans in space and time (decades, centuries, miles, etc.). There is no zeroth century, there is no zeroth mile and there is no zeroth chapter. If the chapter numbers are not meant to be ordinal, then I think it would be odd to call Chapter 3 the fourth chapter.

If you’re at a corner and someone asks for directions, you say “three blocks that way”. That means three blocks starting from here.

Then what do you call “here”?

The name for where you start from in this scenario is usually not required, because it's obvious what you mean, and everyone understands that the first block means you have to first walk a block, not that where you start is the first block.

So in that sense yes we have a zeroth chapter. That’s when you’re at the beginning of the first one but haven’t read all the way.


Folks ... cardinal and ordinal numbers both have "just so" stories to support them. We're unlikely to eliminate either one of them today.

"here" is definitely not a zeroth block. As soon as you start walking, you are in the first block. However, if you are numbering the separations (cuts) between the blocks, you can number that "here" as zero.

Ok, as soon as you start walking you are in the first block, I agree. So then where are you before that? What block were you at before you started moving, when you were giving directions?

What is the name of the block from which you left to enter the first block? Before you started walking I mean.

And mustn’t that block come before that first one? When we move from where we start we count up, so then mustn’t an earlier block mean counting down? Counting down would mean a number smaller than one.

And are blocks not counted in units, as whole numbers?

So would it not be the case that one block less than 1 must by necessity be the zeroth block?

In other words if you agree that “as soon as you start walking, you are in the first block”, then you must also agree that before you left you began in the zeroth block.

How else could it be interpreted?


Before starting to walk, you were at the start of the first block, not at a zeroth block. There is no block prior to the first block. Otherwise that block would be called the first block.

Think of jogging on a road. When you are at the beginning of the road, you are at the start of the first mile, not in the zeroth mile. There is no mile prior to the first mile.


Oh, you’re right. How could I forget the first minute of each day is 12:01, or that a previously unknown computer exploit is called a 1-day exploit.

And everybody knows a pandemic starts with patient 1!


The first element in a collection at address 15 is at address 15. The offset of an element from the start is addr-start, so 15-15=0 for the first, 16-15=1 for the second, etc.

that's why we start from 0, not because of voltages, at least in compsci.


This is all mostly about cuts and spans in a continuum. Cuts can be numbered starting with zero, but spans can't be. Book chapters are spans of content.

Usually chapter 0 is preliminary or prerequisite material. It makes sense in an obvious and intuitive way if you want an ordinal "before the first", even if that sense isn't a rigorously mathematical one (although I think there's no problem with it).

I guess the practice was influenced by computer science - I don't know of an example that precedes it, but one fairly early one I've found is Bishop and Goldberg's Tensor Analysis on Manifolds from 1968, with a chapter 0 on set theory and topology. Back then the authors felt the need to justify their numbering in the preface:

"The initial chapter has been numbered 0 because it logically precedes the main topics"

Quite straightforward.

There's also the "zeroth law of thermodynamics", which was explicitly identified long after the first, second, and third laws, but was felt to be more primary or basic, hence the need for an "ordinal before the first".


Hopefully they don't discover another, more fundamental law, to be called the "minus oneth" law.

The reason is that, for an array (or vector), you find the memory position of the i-th element at base address + i*word_length. And the first element is at the base address, so it has index 0.

It has memory offset 0, which we use as the array index for convenience, so that there's no distinction between a memory offset and the corresponding array index. That's what happens when your arrays are barely different from pointers, as in C. If your arrays aren't just a stand-in for raw pointers, then there's little reason to require 0-based indexing. You can use more natural indexes based on your particular application, and many languages do allow arbitrary indices.

Building floor numbers in at least a few countries I’m aware of start from zero or “G” ( or the local language equivalent for “ground“) with 1 being the first story above the ground.

I think you’re just biased to think that starting must “naturally” begin with 1.

Zero is just a good a place to start and some people do start counting from zero.


The floor number case arises because traditionally it is the count of "built" floors. So, the ground is technically not a floor in that sense. Also, if the floor number indicates a separation (cut) between the living spaces, the ground floor can be numbered as zero, just like the start point of a measuring tape is numbered as zero.

A zeroth century sounds reasonable to me.

There is however the zeroth element of a vector in most programming languages.

Zero is not an ordinal number. There can be a vector element indexed with zero, but it is not "zeroth" element. Book chapter numbers are ordinal numbers.

But what is there to gain with this distinction?

Just the convenience of having an ordinal number to say? Rather than saying "chapter 0, chapter 1, chapter 2" one can say "the fourth chapter"? Or is it the fact that the chapter with number 4 has 3 chapters preceding it?

On first glance I find this all rather meaningless pedantry.


If I use ordinal numbers to count, then counting tells me the number of objects. Sometimes I want to know the number of objects.

EDIT: Yeah, I don't know why book chapter labels shouldn't start with "0". It seems fine to me. They could use letters instead of numbers for all I care.


If they use letters instead of numbers, note that the letter "A" is the first letter of the alphabet, not the zeroth.

When I'm counting letters it's more convenient to go "one, two, three." When I'm finding the offset between letters it's more convenient to go "zero, one, two." Neither of these methods is going to displace the other.

Definitions are fine, and I agree that "A" is the first letter. But that's no use to people who need to think clearly about the offset between "A" and "C" right now. Should I tell them they're wrong, they have to count to three and then subtract one? Because the dictionary says so?


Offset is an answer to the question "where does the Nth memory location start?". The answer is "after N-1 locations". It's the count of locations that need to be skipped by the reader to reach the start of the Nth memory location.

Book chapters and page numbers are not offsets.


Dijkstra wrote a rather famous screed against 1-based indexing, so it's more of an inside joke.

You're also wrong about there being no 0th mile. https://www.atlasobscura.com/places/u-s-route-1-mile-0-sign


I really enjoy having him recall the design of a computer with the first interrupt: https://www.cs.tufts.edu/comp/150FP/archive/edsger-dijkstra/...

For the mathematically inclined, EWD717 and EWD765 have two really cool problems.

A while back someone posted EWD765 asking for an alternate solution; I don't recall if any other solution was found. That was my introduction to these.

[717]: https://www.cs.utexas.edu/~EWD/ewd07xx/EWD717.PDF

[765]: https://www.cs.utexas.edu/~EWD/ewd07xx/EWD765.PDF


I'm amused at EWD498 - How do we tell truths that might hurt? https://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498...

    Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer.

    ...

    The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity.

    ...

    Projects promoting programming in "natural language" are intrinsically doomed to fail.
I'd also recommend EWD1305 https://www.cs.utexas.edu/~EWD/transcriptions/EWD13xx/EWD130...

    Answers to questions from students of Software Engineering
    [The approximate reconstruction of the questions is left as an exercise to the reader.]

    ...

    No, I'm afraid that computer science has suffered from the popularity of the Internet. It has attracted an increasing —not to say: overwhelming!— number of students with very little scientific inclination and in research it has only strengthened the prevailing (and somewhat vulgar) obsession with speed and capacity.

    Yes, I share your concern: how to program well —though a teachable topic— is hardly taught. The situation is similar to that in mathematics, where the explicit curriculum is confined to mathematical results; how to do mathematics is something the student must absorb by osmosis, so to speak. One reason for preferring symbol-manipulating, calculating arguments is that their design is much better teachable than the design of verbal/pictorial arguments. Large-scale introduction of courses on such calculational methodology, however, would encounter unsurmountable political problems.

Dijkstra was so based.

I love the timeless ”Threats to computer science” https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD898...

Also the burn in the beginning of EWD899 (not transcribed) is noteworthy:

A review of a paper in AI. I read "Default Reasoning as Likelihood Reasoning" by Elaine Rich. (My copy did not reveal where it had been published; the format suggests some conference proceedings. If that impression is correct, I am glad I did not attend the conference in question.)

https://www.cs.utexas.edu/~EWD/ewd08xx/EWD899.PDF


I once had one of his quotes on the back of my business card when I was doing a lot of software dev consultancy: "Computer Science is no more about computers than astronomy is about telescopes".

I keep meaning to sit down with this site and make my way through it all. Might make more progress if I grab them into an eReader-friendly format and then peruse them more easily when travelling.


Astronomy is not named "Telescope Science" though. ;-)

In Europe Informatics is more common than CS.

You’re only half serious, but this is actually a good point.

The problem with that quote is that all of us reading this are telescope operators, not astronomers. The quantity and quality of our telescope photos are what we are paid for, so we have no choice but to know our chosen brand of telescope inside and out.

I was taught at UT. Apparently Dijkstra would make his students take exams with pens instead of pencils.

Less likely to make mistakes if you can’t erase


Truly a treasure trove … unfortunately, much of the wisdom from people like Dijkstra seems to have been forgotten or ignored by the software engineering industry.

Since I've been playing around with AI a lot lately, I'd suggest taking a few papers and uploading them for context...seeing good examples vastly improves their subsequent programming ability.

This is a treasure (it’s been around quite a while). For the youngsters out there: still completely relevant. Still ahead of the game, imho.

I've read them all. While they are fun to read, as their commentary comes from a place of logic, there is a lot of emotion baked in and little room for being open-minded about potential alternatives that could find their way to reality. Dijkstra was very smart, but you can tell his thinking is a little closed, which is not objectively bad, but it happens a little too much for my taste.

I love Dijkstra’s writings, but, yes, he had very strong opinions that at times were abrasive. Alan Kay said it best when he said, “arrogance in computer science is measured in nano-Dijkstras.”

Some famous Dijkstra quotes: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”

“Object-oriented programming is an exceptionally bad idea which could only have originated in California.”

As a UC Santa Cruz masters alum, my favorite Dijkstra quotes come from notes from his visit to UCSC in the 1970s (https://www.cs.utexas.edu/~EWD/transcriptions/EWD07xx/EWD714...):

“I found the UCSC campus not an inspiring place, and the longer I stayed there, the more depressing it became. The place seemed most successful in hiding all the usual symptoms of a seat of learning. In the four-person apartment we occupied, only one of the four desks had a reading lamp, and the chairs in front of the desks were so low that writing at the desks was not comfortable. Probably it doesn't matter. Can UCSC students write? Do they need to? The notice boards showed ads from typing services "Grammar and spelling corrected.". (One of these ads itself contained a spelling error!)”


> I love Dijkstra’s writings, but, yes, he had very strong opinions that at times were abrasive. Alan Kay said it best when he said, “arrogance in computer science is measured in nano-Dijkstras.”

https://news.ycombinator.com/item?id=11796926

    alankay on May 30, 2016 | next [–]

    This quote keeps on showing up out of context. Edsger and I got along quite well. He loved to be the way he was and pushed it. ...
(and yes, I left that out of context so that people would go read the whole thing)

I really enjoyed this one:

Some meditations on Advanced Programming

https://www.cs.utexas.edu/~EWD/transcriptions/EWD00xx/EWD32....

As I'm currently in a Functional Programming course in Haskell... This resonated.

I know that we'll always need to write programs which directly interface with memory.

However, when we don't need to do that... Maybe we shouldn't write programs in this style (i.e. imperative). Maybe we shouldn't even use an imperative language (I know, that's a stretch, many languages have incorporated functional aspects and we can utilize them instead of trying to avoid the language entirely).

---

Dijkstra ends EWD32 with:

"Smoothly we have arrived at the third component of our tool, viz. the language: also the language should be a reliable one. In other words it should assist the programmer as much as possible in the most difficult aspect of his task, viz. to convince himself —and those others who are really interested— that the program he has written down defines indeed the process he wanted to define."

"As my very last remark I should like to stress that the tool as a whole should have still another quality. It is a much more subtle one; whether we appreciate it or not depends much more on our personal taste and education and I shall not even try to define it. The tool should be charming, it should be elegant, it should be worthy of our love. This is no joke, I am terribly serious about this. In this respect the programmer does not differ from any other craftsman: unless he loves his tools it is highly improbable that he will ever create something of superior quality."

"At the same time these considerations tell us the greatest virtues a program can show: Elegance and Beauty."

---

Functional languages... help us achieve these aims.


You should read the correspondence between Dijkstra and John Backus and contrast that with your view of Functional Programming.

https://medium.com/@acidflask/this-guys-arrogance-takes-your...


Alas, I live in a world where efficiency does actually matter, and elegance to me includes efficiency. I live in a world of embedded software, portability, and reliability. In this regard, almost every single functional language is an utter failure, because they require runtimes and big fat common libraries. Even golang is borderline. Haskell has little chance.

Generally I think this does answer the question about why functional languages don't dominate more than they do - although you could make an argument that JavaScript is a functional language, and it certainly is enjoying a lot of dominance these days. JS environments aren't known for being particularly efficient, though. To me, efficient use of resources is elegant, and a language needs to be able to do that.


You brought up something interesting. I believe academic computer science originated from at least three cultures: pure mathematics (Church, Turing, Kleene, Dijkstra), electrical engineering, and psychology (Licklider). I say “at least” since there may be other cultures I’ve overlooked. These three cultures have different views on programming: the EE-based culture emphasizes taking full advantage of the underlying hardware, the math-based culture emphasizes proof, and the psychology-based culture emphasizes human factors.

The challenge is reconciling these three views of programming: the holy grail is a programming language that is ergonomic and expressive, yet is also amenable to mathematical reasoning and can be implemented efficiently. I wonder if there is a programming language theory version of the CAP theorem in distributed systems, where one compares performance, ease of mathematical reasoning about code, and human factors?


Your environment is probably not more constrained than this: https://github.com/Copilot-Language/copilot

the only meaningful contribution this guy made was his prose. certainly a talented constructor of sentences, i could never write as precisely as him.

but as far as meaningful technical contributions, i struggle to find anything. his path search algorithm, no offence, is self-evident.

for all the disdain he appears to have had for (what we now call) the 'move fast and break things' style of engineering/science, they were the ones that gave us everything today. you innovate by running experiments, not philosophising and writing proofs.

in retrospect he probably should have stayed in his initial discipline, theoretical physics.


The most charitable thing I can say about your comment is:

who had a fashion of calling every thing "odd" that was beyond his comprehension, and thus lived amid an absolute legion of "oddities." -- from "The Purloined Letter" by Edgar Allan Poe.

To dismiss you/your comment:

“Mediocrity knows nothing higher than Itself; but Talent instantly recognizes Genius.” -- from "The Valley of Fear" by Arthur Conan Doyle.

To deflate your claim of Dijkstra's Algorithm being "self-evident":

Students Struggle with Concepts in Dijkstra's Algorithm -- https://dl.acm.org/doi/fullHtml/10.1145/3632620.3671096

Edsger Dijkstra contributed to:

  1) Algol60 language/compiler 
  2) THE Operating System 
  3) Graph Algorithms (shortest-path etc.) 
  4) Concurrent Algorithms (semaphores, CSP etc.) 
  5) Distributed Algorithms (dining philosophers etc.)
  6) Fault-Tolerant Computing (which he called "Self-Stabilizing Systems") 
  7) Programming Language Design (GCL etc.) 
  8) Structured Programming Techniques. 
  9) Program Correctness Methodologies based on Predicate Calculus to derive Programs (weakest-precondition etc.)
  10) Essays giving insights into "How to Think and Reason Systematically" using commonsense and mathematical Tools.

Completely silly fact: knowing 0 about the guy except that he gave his name to the famous algorithm, I had somehow assumed he was Indian. Weird to see a white Dutchman in the picture.

It all comes down to some degree of "linguistic intuition" that one acquires from, not necessarily speaking foreign languages, but some exposure and proximity to them. My bet is that most Europeans, faced with "Edsger Dijkstra", would have instinctively pointed in the general direction of Holland and upwards.

Funnily enough, I’m western European, have visited the Netherlands several times and I’m friends with some Dutch people.

No idea how this slipped by for so long.


(Curious) How did Edsger Dijkstra sound like an Indian name to you?

I knew an Indian woman named Divya, perhaps my mind thought it looked similar in print?

I don’t think it was ever a conscious decision. It’s similar to how I always pictured Jane Austen as a sarcastic woman in her forties while reading her books, but she wrote her most famous works when she was barely more than a teenager. Your mind just fills things in, I guess.



