Am I stuck in a local maximum? (ploeh.dk)
136 points by dustinmoris on Aug 11, 2021 | 108 comments


My theory is that these career evangelists have spent too much of their careers doing just that: evangelizing. They haven't actually been writing code or building software. They've just been telling everyone how they think they should be building software.

I have arrived at the same conclusions as you wrt. static vs dynamic typing. A sufficiently powerful static type system entirely subsumes dynamic typing, and there is quite literally nothing you can accomplish with dynamic typing that you cannot accomplish with such a static type system. After writing software using both dynamically and statically typed languages, this is the conclusion I have arrived at, and it feels like a no-brainer to me.

This is so much of a no-brainer to me that I can't help but be biased when someone disagrees. I want to think that I am open to having my mind changed, but I have yet to see any evidence that sways me.

I do think perhaps a lack of external demand for code quality and correctness might sometimes be at fault. When all you need to do is write the code for the happy path correctly, maybe you just feel static types get in the way. I've converted several JavaScript projects to TypeScript, and in every single one of those projects, the conversion exposed multiple (in the tens or sometimes hundreds of) cases of incorrect handling of null/undefined, or plain type mismatches.
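
A minimal TypeScript sketch of the kind of bug such a conversion surfaces (assuming `strictNullChecks` is enabled; the `User` shape is hypothetical):

    interface User { name?: string }

    function greet(u: User): string {
      // Plain JavaScript throws at runtime whenever `name` is missing;
      // under strictNullChecks the unguarded call is a compile-time error:
      //   return "Hello, " + u.name.toUpperCase();
      // The compiler forces the undefined case to be handled:
      return "Hello, " + (u.name?.toUpperCase() ?? "stranger");
    }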

I know this has to do with culture and the specific company and engineering discipline, but what I'm trying to get at is that I think a lot of people out there are writing code this way: just hack away until it works, and fix it if it ever crashes. When this is all you want, maybe proving correctness to the compiler is too bothersome. But I don't and won't work this way.


https://news.ycombinator.com/item?id=22210073

From the HN discussion about the video of "A Conversation with Language Creators: Guido, James, Anders and Larry"

https://news.ycombinator.com/item?id=19568378

https://www.youtube.com/watch?v=csL8DLXGNlU

I posted these quotes from Anders Hejlsberg, who co-designed TypeScript, C#, Delphi, Turbo Pascal, etc.:

https://news.ycombinator.com/item?id=19568378

>"My favorite is always the billion dollar mistake of having null in the language. And since JavaScript has both null and undefined, it's the two billion dollar mistake." -Anders Hejlsberg

>"It is by far the most problematic part of language design. And it's a single value that -- ha ha ha ha -- that if only that wasn't there, imagine all the problems we wouldn't have, right? If type systems were designed that way. And some type systems are, and some type systems are getting there, but boy, trying to retrofit that on top of a type system that has null in the first place is quite an undertaking." -Anders Hejlsberg

[...]


I like what Anders Hejlsberg says here, but I still wonder why he is not pushing for F# adoption/funding/tooling etc. I'm in no position to judge such an accomplished language designer in any way, just curious. Is it ego? NIH? Or that other thing (whose name someone else can surely provide) about not being able to see the solution if your salary depends on the problem.


Is he in any way involved with F#? I feel like he is completely focused on TypeScript, C# is slowing down on features, and F# and Visual Basic are pretty much in maintenance mode.


He is not; that's why I hinted at Not Invented Here.


It's almost unanimous that the general "null" from C is a very bad idea.


I know. C.A.R. Hoare, who invented null, called it his Billion Dollar Mistake.

https://en.wikipedia.org/wiki/Tony_Hoare?wprov=sfla1


A bad bet which, as Anders Hejlsberg rightly pointed out, JavaScript doubled down on.


> just hack away until it works, and fix it if it ever crashes.

This also makes collaboration really hard. Types are a form of documentation. But unlike comments, types are enforced by the compiler. Invariably, lots of comments in a codebase are wrong. Many of those probably started out correct but the code changed and the comment didn't.
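
A small TypeScript illustration of the point (the `findUser` function is made up): the comment can silently rot, but the signature is re-checked on every compile, at every call site.

    // A comment like "may return null when the user is unknown" can go
    // stale; the return type below cannot:
    function findUser(id: number): { name: string } | null {
      return id === 1 ? { name: "Ada" } : null;
    }

    const u = findUser(2);
    // Writing `u.name` directly would be a compile error; the type
    // forces the null check the comment could only suggest:
    console.log(u ? u.name : "not found");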


...to the point where you can improve code quality by deleting comments.


In Elixir (and Rust too, IIRC) example code in doc comments is checked automatically as doctests.


To tack on to this:

Typing is never the limiting factor in my work. Like, even if I had to type twice as much to accomplish the same program, I don't think that would actually slow me down. Given that, the major time difference between using static vs dynamic typing is the investment needed to go from 'throwing things at a wall and seeing what sticks' to understanding what you need to do and doing it.


I've found that "throwing things at the wall" can be very helpful sometimes. It isn't necessarily a linear path from designing a solution -> implementing it in code. Sometimes writing some preliminary code can help flesh out the design. It's a symbiotic process. I've wasted countless hours before trying to design a solution on paper when writing the code out and iterating on the code proved much faster.

And as for how this relates to static v dynamic typing, when I'm roughing out an implementation I much prefer to leave out types.


I find I can hack on things faster with static typing. I'm inevitably picking up new libraries, and using a new library is much easier when its inputs and outputs are clear. I can see how dynamic typing would help if you're writing most things yourself, but static typing definitely has a huge return when interfacing with anything someone else wrote.


Totally agree. That's why I think libraries and other mature code should use static typing, whereas I like to use dynamic typing for code that I am prototyping. This is why I love TypeScript: you can have both :)


Anybody who thinks that avoiding extra typing is the point of dynamic languages, or that working in a dynamic language is simply throwing things at a wall to see what sticks, is completely missing the point.

The whole point of dynamic languages is that your problem space is in fact dynamic. In many domains your data simply defies any ontology you try to force it into. Static type systems themselves somewhat acknowledge this problem internally with the difference between nominal and structural typing.

Dynamic languages are about changing your view of data: instead of shoving all your data into and out of boxes that don't fit, you just see entities as arbitrary collections of attributes, and thus typing them makes little sense.


I'm sorry, but this is wrong. See Alexis King's take (which could be taken right out of my mouth, just more eloquently): https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...

The gist is basically that a dynamic type system doesn't afford you any more power or "dynamism" than a statically typed language.

At any point in time, when writing code, you always know something about your data. Even if it were quite literally totally unstructured JSON data, with no rhyme or reason to its structure other than being valid JSON, then at least you know that much, and you can reify this in the type system: `data JSONVal = JSONObject (Map String JSONVal) | JSONArray [JSONVal] | JSONString String | ...`
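
For what it's worth, the same reification is expressible in TypeScript; a minimal sketch (the traversal function is just an illustration):

    // A closed union covering every valid JSON value - "any JSON at all"
    // is itself a perfectly good static type:
    type JsonValue =
      | null
      | boolean
      | number
      | string
      | JsonValue[]
      | { [key: string]: JsonValue };

    // Even "totally unstructured" data can be traversed safely:
    function countStrings(v: JsonValue): number {
      if (typeof v === "string") return 1;
      if (Array.isArray(v)) {
        return v.reduce((n, x) => n + countStrings(x), 0);
      }
      if (v !== null && typeof v === "object") {
        return Object.values(v).reduce((n, x) => n + countStrings(x), 0);
      }
      return 0; // null, boolean, or number
    }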

It is impossible to write code that works on "any kind of data", because your code, and what it does with the data, makes the same kinds of assumptions about the shape of the data as static typing does. The only difference is that a static type system lets the compiler help you find obvious bugs at compile time, while dynamic typing forces you to write tests up the wazoo, and the end result will still be less robust wrt. refactoring than a statically typed language. The statically typed language just gives you all of this for free.


Great blog post, great blog in general, but here's the highlight of all that for me:

"In contrast, most static type systems do not allow such free-form manipulation of records because records are not maps at all but unique types distinct from all other types. These types are uniquely identified by their (fully-qualified) name, hence the term nominal typing. If you wish to take a subselection of a struct’s fields, you must define an entirely new struct; doing this often creates an explosion of awkward boilerplate."

That is exactly the space I want to work in. I want to treat entities as anonymous entities without having to name them, since at that point I approach a 1:1 mapping of names to things in my system. I want to use generic data operations to merge and slice my entities, which are just collections of facts about a thing. Structural typing is a step towards this programming model but still requires that you shove your data into some ontology.
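
In a structurally typed language like TypeScript, at least some of that generic slicing and merging is expressible without naming every combination; a hedged sketch (the `Entity` shape here is made up):

    interface Entity { id: number; name: string; price: number; stock: number }

    // Slice: a sub-selection is just Pick<T, K>, not a new nominal struct.
    function slice<T, K extends keyof T>(e: T, ...keys: K[]): Pick<T, K> {
      const out = {} as Pick<T, K>;
      for (const k of keys) out[k] = e[k];
      return out;
    }

    const item: Entity = { id: 1, name: "widget", price: 9.99, stock: 3 };
    const slim = slice(item, "id", "name"); // typed Pick<Entity, "id" | "name">

    // Merge: a spread yields the combined shape, again without a new name.
    const merged = { ...item, supplier: "ACME" }; // Entity & { supplier: string }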

EDIT: Actually, I take that back; that blog post is totally missing the point. She asserts that the type system just makes explicit the implicit requirements of your code. The problem is in HOW the type system does that. By focusing on the function itself, the author ignores the real problem further upstream. If there's some type, then I have to shove my dynamic data into your type in order to use the function. And if you're trying to use interfaces, there's still some concrete type that you've created to attach that interface to. Either way, you haven't managed to support dynamic data through your system.


You do refer to specific attributes in the code, don't you? So, all in all, you at least use a structural type with some given attributes. Even if you only ever use those attributes, you are already better off with regard to future changes, refactors, etc.


> shoving all your data into and out of boxes that don't fit

Ironically this is what dynamic languages do. Everything is in boxes that could be anything.

It's worth bearing in mind that dynamic languages can be (and often are) built using statically typed languages. You can easily build dynamic collections using static typing.


> I have arrived at the same conclusions as you wrt. static vs dynamic typing. A sufficiently powerful static type system entirely subsumes dynamic typing, and there is quite literally nothing you can accomplish with dynamic typing that you cannot accomplish with such a static type system. After writing software using both dynamically and statically typed languages, this is the conclusion I have arrived at, and it feels like a no-brainer to me.

I've come to the same conclusion. A dynamic language is a footgun that doesn't buy any additional power or expressiveness over a well-developed typed language.

It gets even worse when dynamic language developers start interacting with data. The "just do whatever" mentality that rejects type design also rejects relational design without any consideration of whether it's (almost certainly) a better choice than a document database.


I find the comment that dynamic typing rejects relational design funny, when dynamic typing is what makes working with relational data tractable. Static typing falls down almost immediately when you try to work with your relational data, and ends up requiring reams of glue or mapping frameworks just to make your relational data approachable.


It's a feature, not a bug, that the code blows up when the database schema doesn't match my models.

Just as the necessity of DI frameworks isn't an argument against unit testing or the inversion principle, the necessity of ORM frameworks isn't an argument against static typing or relational databases.


It's not the schema matching your model, it's the result of any query.

How many different types do you need to represent the results of all possible projections on a single 10-attribute relation? Now join that to another 10-attribute relation: how many types to represent all possible projections of a single join between two relations? And Maybe is not the answer; Maybe is a lie. The data is either part of the result set or it is not, depending on the query.


You only need to map the database types to the language types, at worst: int, string, timestamp, binary, maybe GUID. Or you could just use string for all data, and then you're pretty much matching a dynamic language.

Results from joins between arbitrary queries are just lists of these types per field.
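
A minimal TypeScript sketch of that mapping (all names hypothetical):

    // One small closed union covers every database value type...
    type DbValue = number | string | boolean | Date | Uint8Array | null;

    // ...and any row of any projection or join is a map from column
    // name to such a value:
    type Row = Record<string, DbValue>;

    const results: Row[] = [
      { id: 1, title: "ploeh", created_at: new Date() },
    ];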


The problem isn't the attributes. I WISH there were a dynamic language that would statically type your attributes; that would be the perfect hybrid for me.

The problem is typing the aggregates. So for all projections from a single table, that's 2^9 types. For all projections of a join on two 10-attribute relations, 2^19. This is in the realm of "yes, you can do this in a static type system, but why?"
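
For what it's worth, type systems with mapped types can cover all 2^n projections with one generic signature instead of 2^n declarations; a hedged TypeScript sketch (the `Order` relation and `select` function are hypothetical):

    interface Order {
      id: number;
      customer: string;
      total: number;
      placedAt: Date;
      // ...imagine ten attributes here
    }

    // One generic signature types every projection of the relation:
    declare function select<K extends keyof Order>(
      ...cols: K[]
    ): Promise<Pick<Order, K>[]>;

    async function report(): Promise<number[]> {
      // rows: Pick<Order, "id" | "total">[] - one of the 2^n possibilities,
      // without anyone having declared a dedicated type for it.
      const rows = await select("id", "total");
      return rows.map((r) => r.total);
    }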


Most ORMs don't let us do this, and it works well.

Getting back the entire relation (rather than some sub-tuple) isn't the kind of expense that causes problems scaling. It chews up network bandwidth between the database and the API, but it's not more computationally expensive (asymptotically).

And if my entities are dozens of columns wide, I'm probably not in 3NF anyway, so it'd be better to focus my efforts on getting there.


Why do you want to have a static type for each combination of fields in a query? There's no need to do this.

> I WISH there was a dynamic language that would statically type your attributes, that would be the perfect hybrid for me.

In statically typed database frameworks it's often the other way round: you have pseudo-dynamic typing for fields but they're static under the hood, sub-typed for the specific database type and/or stored as raw bytes.

This is basically "boxing" fields - exactly the same principle as dynamically typed languages use behind the scenes, except the library will have customised "boxing" designed for the subset of types the database supports, e.g., typed per column rather than for every individual field in the result (as dynamic languages do). This hugely reduces the overhead of actual boxing, as you don't need to store the type of each field or indirect to its data.

This is sort of like dynamic typing just for the database, except the possible types are focused on a specific subset the database supports and therefore can be made efficient for the task.

Dynamically typed languages must cater for reassigning properties, fields, and/or types ad hoc, and must be made much more general - AKA slow.

For example you could (in pseudocode) do:

    procedure showTitles(queryResults: QueryResults, titleName: string):
      for title in queryResults.fieldData(titleName):
        let str = title.getString # Returns a string type.
        display str

    let queryResults = db.query("SELECT * FROM MYTABLE")
    showTitles(queryResults, "title")
You get the benefits of static typing and dynamic typing: type errors are caught at compile time, and you explicitly or implicitly convert the "dynamic" box for the field. For instance, trying to pass an object that isn't a `QueryResults` to the `showTitles` procedure will produce a compile-time error. In dynamic languages, you could pass anything to that procedure and have to hope it doesn't blow up at run time. Testing all the possible paths and dynamic types that could be given to this procedure could be a combinatorial nightmare, so "to be safe" you'll have to check the type is equivalent to a `QueryResults` anyway at run time...

The `getString` function can do whatever it needs to do to convert the data if it's not stored as a string (or throw an error if this isn't possible/appropriate), whilst maintaining type relationships once it's out of the pseudo-boxed type and ensuring memory used is appropriate to the "real" type.

Source: I've written database query frameworks for statically typed languages.


Not my experience at all. I am much more productive when using types to interact with data. And I have programmed in Lisp, JavaScript etc. so I have real world experience comparing the two world views. I wonder if you have actually worked with typed languages and relational data in an effective way?


> A sufficiently powerful static type system entirely subsumes dynamic typing, and there is quite literally nothing you can accomplish with dynamic typing that you cannot accomplish with such a static type system. After writing software using both dynamically and statically typed languages, this is the conclusion I have arrived at, and it feels like a no-brainer to me.

Do you think some existing practical, high-usability programming language has a sufficiently powerful type system in the way you describe?

For me the big thing is that static type systems tend to be simultaneously too powerless to express what I need, and too laborious in demanding manual specification of things that I don't actually need to specify for the program to work. In a dynamic language with a good schema system like malli or spec in Clojure, I can have easily customizable verification that is opt-in.
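
For comparison, opt-in and customizable verification exists on the typed side too; a hedged TypeScript sketch using the zod library (analogous in spirit, not a claim of malli/spec parity):

    import { z } from "zod";

    // A composable schema, checked only where you opt in by calling it;
    // the static type is inferred from the same definition for free.
    const User = z.object({
      name: z.string(),
      age: z.number().int().nonnegative(),
    });
    type User = z.infer<typeof User>;

    const parsed = User.safeParse(JSON.parse('{"name":"Ada","age":36}'));
    if (parsed.success) {
      console.log(parsed.data.name); // statically typed as string
    }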


I prefer dynamic typing when writing code that I perceive as "volatile". This is especially true when it's a piece of code that's currently only being worked on by me. Static typing adds boilerplate that feels cumbersome, especially when the types of the variables are already made obvious by the variable names.

However, when a piece of code becomes mature, doesn't change often, and starts to be depended on by other people (e.g. a core library), then I start adding types for additional safety and documentation. This system of gradually adding types when needed is why TypeScript is so great for me.


> Static typing adds boilerplate

That’s not necessarily true anymore. Languages like Haskell have both strong static typing and very good type inference.
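
(TypeScript also infers most local types; a tiny illustration with no annotations written at all:)

    // Nothing is annotated, yet everything below is statically checked:
    const doubled = [1, 2, 3].map((x) => x * 2); // inferred: number[]
    const total = doubled.reduce((a, b) => a + b, 0); // inferred: number
    // doubled.push("four"); // would be a compile-time error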

> However when a piece of code becomes mature, doesn't change often, and starts to be depended on by other people (eg a core library), then I start adding types for additional safety and documentation.

That’s a practice in Haskell development as well, but the safety and guarantee of consistency are already present prior to the annotations.


Type inference is a beautiful thing, and I hope TypeScript's gets more powerful someday.


I can see that, but find myself with different preferences. When I know I'll be making changes, I want to be working in a setting where I can tell some checker what I'm assuming, so that it lets me know when work elsewhere invalidates that assumption. Assumptions that will be mechanically checked are assumptions I can (metaphorically) page out in my head as I focus on other things, to be alerted when they again become relevant.

I definitely agree that all of this becomes even more important when those assumptions need to be shared between the heads of different individuals.


> there is quite literally nothing you can accomplish with dynamic typing that you cannot accomplish with such a static type system.

Exactly, and all you need is Any

> a lot of people out there are writing code this way: just hack away until it works, and fix it if it ever crashes.

I'm willing to agree that non-TDD hackers should go for static typing.


Tests are a very underpowered replacement for types. They require a lot more work to create, much more maintenance, and bring you much less confidence. Unit tests are the worst kind on each of those dimensions.

It's really not a good idea to follow TDD instead of using types. You want tests for things that aren't practical to verify, not for replacing static verification.


>Exactly, and all you need is Any

If you're using Any, then your program is (at least in part) dynamically typed. I don't think it's fair to say "look! Static typing is better than dynamic typing in any and all situations! (so long as it can use dynamic typing)"
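
TypeScript makes this trade explicit; a small sketch of how `any` re-introduces dynamic typing while `unknown` keeps the checker engaged:

    const a: any = JSON.parse('"hello"');
    a.toFixed(2); // compiles, throws at runtime: dynamic typing smuggled in

    const u: unknown = JSON.parse('"hello"');
    // u.toFixed(2); // compile-time error until the value is narrowed:
    if (typeof u === "number") {
      u.toFixed(2); // OK: statically known to be a number here
    }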


Exactly :)


I first read this problem:

  micro-management, red tape, overly bureaucratic processes, and a lack of developer autonomy
and then this:

  essentially have no process. Everything is ad hoc, nothing is written down, deployment is a manual process, and there are meetings and interruptions all the time

The weird thing about enterprise is: I see both happen at the same time.

We have an issue tracker, but it's a bad product and the metrics are heavily politicized. So we track our issues in Excel. There's no way to know whose Excel sheet your name appears on.

We have a document-sharing platform, but the fine-grained access control makes it impossible to let the relevant people read the document. So we mail different versions around and forget to update the central truth.

We have architecture documents, but they have nothing to do with what runs in reality. We can't document what runs in reality because it would conflict with the official architecture documents.

We have a release process that takes a week or a month, and much paper is written about risk and impact, test plans, deployment plans, resource availability, etc. But the actual risks must stay unspoken lest the release be cancelled. Tests were done on an environment that is forbidden from resembling production. People ignore the deployment plan and do whatever they like - and thank god, because the deployment plan is unusable. People are not available when things go wrong, because the deployment plan booked them for 2 days to do 5 minutes of work and they decided their part was done.

The worst problem: Instituting any real process is impossible, because officially there already is one.


I worked in architect roles for a while; I tried to make the architecture documents and the systems match, and it was impossible. I could not update the architecture documents to match the real-life systems, because on the first internal audit my skin would be on a stick. I could not make the systems respect the architectural decisions, because management not only allows but actively pushes cutting corners for many, many reasons, and they outrank me, or systems simply get built in non-compliant, unapproved ways. The architect is that person you ask for confirmation when you think they will agree, and ignore the rest of the time.

In the end everyone pretends there is no problem, but everyone knows there is one, including the auditors, who selectively raise a bunch of small issues just to cover themselves with "we told you there are issues", but nothing big enough to trigger any alarms.

Non-IT companies compromise a lot on IT; they don't want to compete for talent (because IT is "not core"), they don't invest the right amount in IT (because they "don't need" to), and at the end of the day they cannot live without IT, so they cut corners with top management's blessing. If there were a SOX equivalent for IT systems, most CIOs/CTOs would go straight to jail - do not collect 200.


This reminds me of something one of my CS professors said about when he worked in industry. They had a joke:

"We may work slowly, but our products are bad."

Nobody laughed, so he added, "You don't think that's funny today, but in five years you'll think it's hilarious."

Can confirm, I think it's hilarious.


Ooof, yeah, this rings familiar.

Or similar things, where actual risks may be spoken, even to power, and power will even thank you for raising them, and then proceed to ignore them because they're unpleasant and hard to address.


The best places are the ones that know about the risk matrix, and will insist you estimate probability and severity and place your risk there; and the matrix is capped at the usual 3x3 grid, obviously with a red square at the low-probability, disastrous-severity cell; and there's a rule that any risk on a red square is unacceptable and must be managed until it isn't.

All very reasonable rules, with the combined effect that you can never flag a risk that inherently takes your system down (e.g. all of our ISPs going down), however rare it is.


Oh, definitely. I personally like the fact that there isn't something stronger than "risk", like "ongoing problem". I.e., "We're building a mobile product and have zero mobile devs". We're stumbling along and figuring it out as we go, so does it jeopardize the project? Well, no. Does it jeopardize timelines, and a whole bunch of non-functional requirements such as stability and security? Very, very much. But I can't quantify those.

Or "velocity is X per sprint, and the deadline you've set is 3 sprints, and we have 6X worth of stories". That's not a risk, that's a promise we'll not deliver on time. But then upper management just sees that risk and either just says "well, we need to hit it", adn expect you to magic a solution, or tries to solve it in the only way upper management knows how, by trying to throw more people at it, despite the fact they've all presumably heard of the Mythical Man Month.


This may be the worst thing I have ever read on HN.


In what way? Language? Content? World view?


Most of us would consider that a nightmare scenario to work in. The closer to startup life you are, the less that exists. Smaller organizations die if they have that kind of dysfunction.

When everything succeeds, there's no problem. When it fails but is salvageable, it's a giant finger-pointing exercise that makes solving the actual problems take twice as long, but ultimately has no consequences for the processes or people.

When things truly fail, people are thrown under the bus because they "didn't follow process." Management is unscathed.

And the way each part of the process developed is logical: once a system is large enough, it develops enough scars that say "we won't fail again in that way" - and all failures are because the process wasn't followed.

Then IT is outsourced because it's too expensive. The outsourced operation openly skirts much of the process, but can't get the information necessary to build the right product and is seen as a cost center. Eventually, with enough failures, trust degrades and IT is brought back in house. The cycle repeats.

You wrote it very well.


The horror I felt imagining how the people who work in such an environment might feel, if they haven't checked out already.

(Edit) So it isn't actually that dramatic.


Don't weep too much for our damned souls. It's actually not as bad as it sounds, if only because everyone knows the rules are insane. A lot of the time, some manager goes to the meetings or shuffles the papers for us. We do lose a lot of time and it's sometimes tiring, but there is also, if you want it, real and interesting work to do, with noticeable real-world impact. It also allows me to mix work and children, which is sometimes hard in smaller orgs.


Glad to hear that. Spending time with your kids is worth so much more, IMO - the greatest joy I feel is when I make 'em laugh.


Trying to get out of it at the moment, but it’s seemingly very difficult.


You want to, and you will. Keep at it.


And yet, it's way too common...


This is how most non-tech Fortune 1000 companies run.


"After I read the book, I've come to understand that general-purpose static type system can never prove unequivocally that a generic program works. That's what Church, Turing, and Gödel proved."

Oh, not that again. If a program is close to undecidable, it's broken.

Microsoft ran into this with the Static Driver Verifier. If it couldn't prove that a driver could not blow up the kernel after the prover had cranked for a few minutes, the driver was rejected. Really, if you don't have easily proven decidability in your device driver, it has no business going into production.


I think it also doesn't really matter in practice. While it would be nice to have a language where, if it compiles, it's provably correct (for some definition of correct, defined by whatever the language states, I guess), we don't need that extreme for it to be useful.

I've personally found myself dissatisfied with dynamically typed languages after spending the bulk of the past ten years writing in them (Clojure and Python mainly), because the kinds of errors I most commonly find in my own code are ones that even basic static types, like those in Java or even C++, can detect (simple type mismatches), while in dynamic languages, to catch these early, the unit tests need to have exercised that particular case. That puts the burden of thinking of and testing for these cases on me, and we all know that exhaustive tests are rare and hard to maintain. Property-based testing helps, but again, whether a type mismatch gets tested depends on the data generated.

Statically typed languages prevent this type of bug, and that makes them useful to me, even if they can't prove the absence of all bugs or decide whether a program halts.


I'd like to see the industry move towards proof-carrying code. This takes decidability checking one step farther: not only does your driver have to be decidable, you have to prove that ahead of time and ship a proof that can be checked in linear time right along with the binary. At that point even signatures become redundant; you don't need to trust any central authority if you have proof of the program's validity right there.


You still need some criterion to verify against. The Static Driver Verifier is successful because its criterion is just "will not blither all over memory" or "will not make an incorrect API call". Passing the check doesn't mean the driver will drive the device properly, but it shouldn't crash the kernel. Or offer buffer-overflow exploits.


Yes, those are exactly the criteria that PCC was developed with in mind: "examples of safety properties that are covered in this paper are memory safety and compliance with data access policies, resource usage bounds, and data abstraction boundaries" (from the abstract of [0]). That's what safety is, after all: that something bad does not happen.

[0] George C. Necula, Peter Lee. Safe, Untrusted Agents using Proof-Carrying Code. 1998. http://www.cs.cmu.edu/afs/cs/project/pop-10/member/petel/www...


Isn't Microsoft still trying to make a proven TLS implementation? If that takes really smart people years and years to do, I don't see much hope for the rest of us on line of business apps.


Yes, choosing the right criteria/spec to prove against is a big task, and one that is not done at all today. Mathematicians sometimes say that fully describing the problem can be more work than actually proving it for a particular solution, and I imagine there are some parallels between that and the criteria-vs-proof problem you mention here.

If there's one biggest problem, it's that nobody cares about a good spec, so nobody will pay for one.


For that to happen the industry needs to care more about quality, and that will only happen when liability becomes a real thing across the industry and not only in highly critical software.


> Really, if you don't have easily proven decidabilty in your device driver, it has no business going into production.

In other words:

"Every system is either so simple there are obviously no errors or so complex there are no obvious errors." ~ Hoare


Many of the author's heroes became prominent because it benefited them as consultants pushing their methodology. Those methodologies are designed (in part) to make it easier for younger, less experienced programmers to contribute to large software projects. I would say that it's okay to disagree with them over time as you gain experience and understanding.


Actually, I think this is the correct response. @everyone please ignore my response below/above/wherever.


That's interesting, because better type systems would be a good way to do that too, but most of these people usually push back against them. One theory could be that they want to work with existing code, but if you take Robert C. Martin as an example, he often pushes Smalltalk and more recently Clojure (which is more pragmatic, since it works with existing Java code). That's something I don't understand. Were the other languages unusable when these people became well known? Is this just their personal biases?


Agile methods are not designed for "younger, less experienced programmers". They work best for enabling successful, experienced hackers.


In practice, Agile is often Scrum, which in practice breaks down tasks into bite-sized blobs of functionality on a conveyor belt of work, to be implemented by a team of interchangeable human resources.


That would be Kanban... And in practice, many orgs are performing a pseudo-Scrum which is really just a timeboxed Kanban with a lot of cargo-culting of the Scrum ceremonies.


If you watch Robert C. ("Uncle Bob") Martin's talks, he spends a lot of time talking about how the first generation of programmers didn't need agile because they were already professionals from other fields. The agile methodologies were created because the industry decided to hire younger, less experienced, and less diligent, staff.


You contradict my uni SWE professor: my understanding was that waterfall was SWE's almost-cargo-cult adaptation of engineering practices from other fields (civil eng, mech eng, etc.) which “worked” in many use-cases but was/is entirely inappropriate for continuous delivery, which is how I'd wager the majority of software gets delivered these days (as opposed to it all being “1.0” releases burned to a CD on the client's desk, with no further changes made, ever).

Agile, XP, Scrums, Sprints, etc., do not represent a dumbing-down of engineering practices to accommodate coding-camp types: they require just as much discipline and understanding as the techniques of yore, but they overall work better, which makes doing our job easier. Do not equate “easier” with dumbing-down. And I'd wager a FAANGMA summer intern SE today would run laps around a decade-burned SE from the 1980s - not least because they have the benefit of learning from the mistakes of the past 40 years.


A great post! Thanks. For me, it is the meta issue of how we think about the examples that is most interesting, rather than the examples themselves. One contribution I have is that we humans are biased toward action, as we must be. And biased toward results. These are survival skills. If acting requires me to adopt some belief, I do so - even if that belief is incomplete or even false. And if acting on that belief produces some result, it grows stronger.

Long, long ago I had a friend who had an old car and not much money. The car would often not start on the first try, so he would get out, open the hood and wrap some exposed wires in electrical tape, then get back in the car and it would usually start right up. I offered to replace the exposed wires. I was bewildered to find that the wires were not connected at either end! I had watched the process where wrapping the wires fixed the problem, yet one would have to be insane to accept that as the cause. I finally understood that the float on the carburetor was defective (acting like the float in a toilet). The time delay involved in wrapping the wires allowed the gas in the flooded engine to evaporate enough to start. I replaced the float and that fixed the problem.

At least that is what I now tell myself, but perhaps my own belief system is as flawed as wrapping the wires? Perhaps it was not the float but a gasket or that it fixed an interaction between some unknown type of subatomic particles which in the future will be used for near instantaneous space travel?

So in answer to the question "Am I stuck in a local maximum" the answer is "always".

(edit for clarity)


I've worked with people who were TDD maximalists, Agile / Scrum maximalists, microservice maximalists, Gang of Four maximalists, and even a MongoDB maximalist.

What has become clear to me is that the silver bullet doesn't exist. Pick the tool for the job, and accept that exceptions exist for every rule. The maximalist is too rigid, and in the rigidity is arrogance and ultimately failure.

I would even say that if you take someone who is obsessed with something like TDD, and then ban all engineers from using TDD, it is quite possible you get a better product. You don't want TDD for every problem - sometimes it simply is a barrier to success.


> "To the far right, we have a hypothetical language with such a strong type system that, indeed, if it compiles, it works."

> For good measure, despite my failure to understand the implications of the halting problem, I'm otherwise happy with the article series Types + Properties = Software. You shouldn't consider this particular example a general condemnation of it. It's just an example of a mistake I made. This time, I'm aware of it, but there are bound to be plenty of other examples where I don't even realise it.

I don't see what the mistake is. The halting problem and Gödel's incompleteness theorems only say there exist problems where you can't know whether there's a solution or not - that's very different from saying you can't come up with solutions for specific problems.

For example, you can't tell whether an arbitrary crazy program halts, but most loops people write in practice have trivial proofs of halting, because they're usually simple loops from 0 to the fixed length of a collection. If code is full of loops whose termination you're nervous about, there's probably something very wrong with the design that would get flagged in integration tests and during use anyway.

It's been demonstrated you can statically prove the correctness of something as complex as an operating system for example (see https://sel4.systems/About/) so the above isn't much of a barrier to proving interesting program properties:

    seL4's implementation is formally (mathematically) proven correct (bug-free) against its specification, has been proved to enforce strong security properties, and if configured correctly its operations have proven safe upper bounds on their worst-case execution times


Try going all-in for a year on the other viewpoint, assuming you've reached a local maximum on your favored approach. At worst you'll be slightly less efficient for a year. At best you'll be more efficient and come out of it with a whole new dimension in your skillset.


I wonder how much he'd hate his heroes if he tried something like that. Or maybe he'd decide it's not worth his time and money and stay stuck in his own bubble.


Just try to listen to the other viewpoint from someone competent, it may be enough.


Interesting questions, but I think one aspect is sorely missing from the analysis: personality.

The author sees himself as an introvert. He likes typed languages. He reads books about Turing, Church and Gödel. He seems to like understanding things. Sounds to me like he has a cognitive style where what Kahneman calls System 2 is dominant.

I’m not familiar with all his “heroes”, but I would assume most people become “heroes” because they are somewhat extroverted. They like talking to people. They like going to conferences. They like being in the limelight. With this often comes a different cognitive style, closer to what Kahneman calls System 1.

People are incredibly different when it comes to cognition. I think the answer to the question in the title is: Your thinking is based on a faulty premise. There simply is no single objective function.


What a brilliant comment, great insight


Nice to see a recommendation for Petzold's The Annotated Turing in the post, it's a great read.

I'm not sure I really understand the writer's concern though. The debates he mentions (functional versus object-oriented programming, dynamic versus static typing, oral versus written collaboration) don't seem like issues with right or wrong answers in and of themselves, but rather issues for which to compare and contrast benefits and tradeoffs according to one's goals, priorities, and preferences in a given situation. Disagreement with heroes in such a context seems a complete non-issue.

> "could I be stuck in a local maximum?"

I perhaps don't quite understand his meaning of the question, but as perfection is infinitely precise and necessarily unattainable, everyone (who's at least honestly doing their best) is always stuck at a personal local maximum, and will be until they find a new maximum through continued learning. And I think there'll always be enough variety of experiences and preferences among individuals that we'll never all completely converge to the same maximum.


> In a recent discussion, some of my heroes expressed the opinion that they don't need fancy functional-programming concepts and features to write good code. I'm sure that they don't.

It's OK to disagree with Bob Martin - many do. The days of TDD silver bullet training gold rush are over.


Not necessarily a Robert Martin fan, but he is writing Clojure (functional Lisp) as his primary language these days.


My favourite question for TDD presenters was to ask them to show how to design GUIs according to specific UI/UX criteria while following the TDD religion.


> To the right, in this context, means more statically typed. While the notion is natural, the sentence is uninformed. When I wrote the article, I hadn't yet read Charles Petzold's excellent Annotated Turing. Although I had heard about the halting problem before reading the book, I hadn't internalised it. I wasn't able to draw inferences based on that labelled concept.

> After I read the book, I've come to understand that general-purpose static type system can never prove unequivocally that a generic program works. That's what Church, Turing, and Gödel proved.

This is not exactly the right inference. You can have a language that has the property "if it compiles, it works", that language just cannot also be Turing complete.


> This is not exactly the right inference. You can have a language that has the property "if it compiles, it works", that language just cannot also be Turing complete.

Not quite. It can be Turing complete as long as you fill in the proof steps the compiler can't infer for itself. Coq, Idris and other theorem provers can all produce programs with arbitrary looping behaviour given valid proofs are generated.


This seems wrong. If you require a proof that a program halts for all input (or of any halting equivalent property), then there will be programs you cannot write - those that possess the property in question but for which no proof exists, and those that actually lack the property.

In particular, you obviously cannot write a universal Turing machine in a system that requires a proof of halting for all input, because it by definition won't halt if it is fed a program that does not halt. A system that cannot build a UTM is not Turing complete.


I'm not sure what exactly is not right. Type-based theorem provers exist (Coq, Idris, Agda, etc.). They can be used to write Turing complete programs (eg. CompCert). If you encode the properties that need to be satisfied and the compiler successfully compiles your program, then those properties are satisfied.

I suspect you've simply misunderstood the claims that were made. In particular, the above does not entail that you can write any program. Some programs simply have no valid type.


I guess that I am claiming that using Coq, Idris, Agda, etc, (in a setting where we are actually using them to prove correctness properties that are halting equivalent), is itself a particular way of working in a language that is not Turing Complete.

As an aside, "Turing complete programs" seems like a category error?


I find the “hero” wording a little grating and I think it’s the problem here.

They’re just people expressing mostly subjective opinions.

The problem is developers put these people on a pedestal and take their word as gospel. In reality you need good judgement to understand and appreciate the trade-offs. I also think developers look for silver bullets and are happy to accept anyone with some authority telling them one exists and they needn't think for themselves.


article says: "I find myself disagreeing with my heroes on a regular basis, and that makes me uncomfortable."

Why should you agree with your heroes? You are not in the circumstances they were whenever they became your hero.


I have found that no single "pure" paradigm is the best response to development needs. By pure I mean pure OOP, FP, TDD, etc.

Instead I am currently working on mixing various paradigms together to achieve a better result.

Some examples:

-- I use OOP to flesh out the domain model of my application, but use FP for most of the infrastructure and application around it. Some processes within OOP model are implemented using FP. I use OOP for its strength in bringing meaning and clarity to the most important part of the model.

-- I use unit tests only for the important, non-trivial constraints on the tricky parts of the code. For the rest I prefer functional tests -- focusing on testing external application behavior lets me refactor the code easily, without spending time on a huge mass of unit tests that need to be fixed up after every refactoring. I still test the application -- the test suite ensures I will get notified if the application misbehaves, but now it is not tied to the actual internal implementation.


I don't have any heroes in the field of computer science (hero is too strong a word for me), but there are many experienced people I admire who express ideas or opinions I find appealing.

An example: I admire Niklaus Wirth (creator of Pascal, Modula-2, Oberon) and in particular his preference for strongly-typed programming languages that are small in size. He's spent a lifetime in the computer science field, but he disagrees with the ideas of others - and others disagree with him. Wirth is critical of functional programming as a programming paradigm, for example.

Overall, can anyone argue that someone is emphatically right or wrong in this field? Everyone settles on a philosophy that shapes their outlook based on their experience and research. And that outlook might change over time. That accrued wisdom and experience (or inexperience!) will naturally cause someone to gravitate to one philosophy, or disagree with another.


> After I read the book, I've come to understand that general-purpose static type system can never prove unequivocally that a generic program works.

This sentence seems confused. What is the meaning of "works"? The most general interpretation seems to be "works" = "satisfies some proposition P".

But, "general-purpose static type system can never prove unequivocally that a generic program satisfies some proposition P" is false. In fact, static type systems unequivocally prove many propositions about general programs. The generality of the propositions depend on the expressive power of the type system.

If you dial static typing up to 11 (dependent types), then P can be literally any proposition about the program. This sometimes requires programmers to provide the right type annotations, but this is no different in principle from needing to program behaviour, ie. a computer needs you to write out the program describing what you want it to do, and a type checker needs you to write out what you want it to check.
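
A minimal Lean 4 sketch of "P can be literally any proposition": the type is the claim, the program is the proof, and the checker verifies it at compile time.

    -- Propositions as types: if this compiles, the claims are proved.
    theorem two_plus_two : 2 + 2 = 4 := rfl

    -- A property of a program: appending lists adds their lengths.
    theorem length_append (xs ys : List Nat) :
        (xs ++ ys).length = xs.length + ys.length := by
      simp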

You wouldn't expect to just wave your hand and have your computer create the program you have in your mind for you, would you? Or write out the kind of program you want in ambiguous, natural English and have the computer figure out exactly what you meant?


Setting aside the specific topic of typing, the article is a great read simply as an inspiration in analysis and introspection. A great example of a "healthy sense of doubt", applied internally and externally, pushing along learning. Good piece.


I think that is an important blog post.

My approach is to discuss these things on forums on the internet, and at work on coffee breaks - just to challenge my own beliefs and see other arguments.

When starting a new project, don't spend too much time discussing or blocking people, because a lot of projects will just roll along, and arguing about starting the project in the functional paradigm most likely just costs time and brings nothing. The best tool for the job is most often the tool the team knows best.


I have to disagree and risk getting labeled a member of the Rust Evangelism Strike Force. The service at NPM and that interview with that developer at Qovery paint a different picture.

(Edit) Bleacher Report replacing 150 Ruby on Rails instances with 8 Phoenix on Elixir instances would be another example.


Trying not to be totally contrarian :) With the following I am not being a smart-ass, only setting up sentences in such a way as to convey what I think is the hard problem in delivering software.

But good luck making the decision to go with Rust when what you have is 5 X(lang) developers.

If you have to deliver v1 of a product and have limited budget, good luck hiring 5 brand new Rust developers.

What I learned as a boy scout: first you have to have people who want to do the work. Having loads of money can help, but most of the time we don't have an unlimited supply of money.

Having people who want to do the job beats everything.

People who are deep in tech (or language) X rarely switch to something else, and then you still have to take into account that they will write X in Rust, which might not be the best way of doing things.

On the other hand, if we assume spherical cows, or unlimited budgets and time frames for a project, we can discuss getting X devs to write in language Y.


I really enjoyed this and felt it resonates quite a lot with my own views.

I think a lot about there being no absolutely "right" or "wrong" approach - rather, approaches that work for some teams and individuals and don't succeed for others.

As well as being too easily distracted by shiny new tech toys, our industry's other big dysfunction is a ready insistence by "thought leaders" about how to do things right, the One True Way.


His heroes are the agile manifesto folks? Yeah, it is a good thing he disagrees with them. Bob Martin has been posting howlers for a long time. One of the problems with never formally studying CS is that you might never discover the real computer scientists who could be your heroes. Who could disagree with Lamport, Peyton Jones, Hinton?


Agile is a tool for feature-dominated webcrap where "sort of works" is good enough. It's not useful for, say, database internals or hard real-time.

If you just need to put ads on a page, a deep knowledge of Knuth and Dijkstra will just make you unhappy with the mess that HTML/CSS/Javascript has become.


> Agile is a tool for feature-dominated webcrap where "sort of works" is good enough. It's not useful for, say, database internals or hard real-time.

Have to disagree there. The basic point of "agile" methods is to promote small-scale teams (with the fabled "two-pizza team" at the largest end) doing heavily-iterative development with an emphasis on refactoring and continuous delivery ("release early and often"). There's no reason why this could not work for these sorts of projects.


Sadly, this sort of software seems to dominate discussion nowadays.


> I don't have a formal degree in computer science

That ploeh does not have one is a big shock to me. It would not be a big stretch to say that every .NET systems architect worth their salt has stumbled upon his content at least once in their life.


What does one have to do with the other, though? Writing (and "architecting") good software is mainly craftsmanship, which is learned via experience, not something that requires a "formal degree". The years at university are probably a good place to get some of that experience, but "computer science" is mostly tangential to the actual problems of programming.


Yeah, I know it's kinda irrational, but he is a very academic and scientific type of guy.

Oh, and for the record, I don't have a degree either.


Or maybe you have just outgrown your heroes and it's THEY who're stuck in their local maximum.


This may have been incidental but I think I was just triggered for the first time in my life. The far right is now statically typed and dynamic typing is now the left?

/slight tang of sarcasm


Makes sense.

The right is usually the law-and-order side while the left is the live-and-let-live hippies. :-)



