The problem with this article, though it presents model checking and state-space exploration well, is that it doesn't present weak memory models[1], which are the reality on all multiprocessor/multicore systems these days. Thus it perpetuates misconceptions about concurrent programming. It is a poor example of model checking because it assumes sequential consistency, and it actually confuses people by reinforcing incorrect assumptions.
[1] With weak memory hardware, it is effectively impossible to write correct programs without using at least one of atomic read-modify-write or barriers, because both compilers and hardware can reorder both reads and writes, to a maddening degree.
(I previously posted this comment as a reply to a comment that got flagged.)
As I read it, it felt more like an exploration from a state-space standpoint (and not an example of model checking), which to me sounded quite reasonable. Unusual but intuitive, I'd say.
The author does start talking about model checking in the third paragraph and goes on to use SPIN, so a significant part of the article is interested in model checking, anyway.
I can see where the parent is coming from.
I think you can both be right - it can be valuable in any case.
> Thus is perpetuates misconceptions about concurrent programming ... it assumes sequential consistency and actually confuses people by reinforcing incorrect assumptions.
That's not quite true. The article explicitly mentions problems with interleaving of instructions between two processes (aka threads), which is the fundamental problem of concurrency. If you consider only a single thread in isolation, its program order is always maintained, even though compiler optimizations and hardware OOO execution actually execute the instructions in data order.
> The article explicitly mentions problems with interleaving of instructions between two processes (aka threads) which is the fundamental problem of concurrency.
Again, with weak memory models, writes from another thread can be observed out of order due to hardware reordering, even without compiler optimizations. With weak memory models, there can be behaviors that correspond to no valid interleaving of two threads.
I am explicitly pointing out that what you are assuming, along with the author, is what is often called sequential consistency (https://en.wikipedia.org/wiki/Sequential_consistency), and it hasn't been true of hardware for decades. It's a common misconception that most systems obey sequential consistency. Your comment just seems to repeat that misconception, and it isn't true. See here (https://preshing.com/20120930/weak-vs-strong-memory-models/) for a deeper explanation.
I used the phrase "single thread in isolation" simply to point out that parallelization is going on underneath (via optimization/OOO/superscalar execution) for that thread even in a single-threaded scenario, not that it always implies sequential consistency in a multi-threaded scenario. That is the point I disagreed with in your comment, where you said the author assumed sequential consistency, which is not what I got from the article. In my previously linked comment I explicitly said this:
> Once you start thinking about a program as a sequence of loads/stores (i.e. reads/writes to shared memory) and note that pipelining/OOO/superscalar execution are techniques to parallelize these (and other) instructions for a single thread of control, you start getting an idea of how sequential order can be preserved even though the actual execution is not quite so.
IMO some of the confusion comes from conflating execution with the inherent properties of a program. Concurrency is a property of a program or algorithm, and this property allows you to execute the program with partial or complete re-ordering (the extreme of which is what some people call embarrassingly parallel). How you wish to exploit this property in execution is up to you.
Concurrency should always be studied first from higher-level logical program/algorithm execution properties, with a focus on both shared-memory and message-passing architectures. Only after that should one look at the lower-level hardware memory consistency model, cache coherence, atomic instructions, memory barriers, etc. For example, learning to use a logical shared-memory "Mutex" is far easier than learning how it is internally implemented using atomic instructions, memory barriers, etc.
I disagree. If we want people to understand, then we should teach computer systems bottom-up and thus understand the bottom-level behaviors before being surprised that our bad (i.e. wrong) abstractions are leaking later. If you want to train programmers to accomplish parallel programming tasks via plugging together nice abstractions, that's all well and good, but that's a completely different kind of instruction than trying to teach them how things work. If we only do the latter then we'll never have experts again.
In my previous comment I linked to an earlier comment of mine which already pointed this out, so it is not revelatory. W.r.t. concurrency, however, the logical approach should precede the lower-level implementation approach, since there is much more complexity involved in the latter. Mutexes/Semaphores/Condition Variables etc. are not bad/wrong abstractions but necessary, correct ones. They are fundamental to understanding/managing concurrency itself.
The issue is that logically correct algorithms can indeed be made, but actually making them work in real conditions is its own problem. It's not necessarily wrong to explore more exotic algorithms, but a focus on algorithms that people should actually use in their given situations seems higher priority. We should be working on bringing useful concurrency to our programs right now.
And while I will agree that mutexes and the like are useful in their own right, they are not the most fundamental for understanding concurrency.
> If you consider only a single thread in isolation its program order is always maintained even though compiler optimizations and hardware OOO execution actually execute the instructions in data order.
It's not, in most programming languages. A big part of the linked Wikipedia article is actually about when and how program order is allowed to be modified.
A great portion of the JVM memory model definition is dedicated to laying out exactly when and how read/write barriers are established and when/how ordering is enforced. Without any sort of barrier, the JVM is (in most cases) fairly free to completely reorder program execution.