I agree with this article but I have to admit I'm tired of learning programming languages after a number of years in this industry. I've had gigs in just about everything and now I'm purposely restricting my efforts to just a couple different languages even if there is money on the table for work in something else.
You can do just about everything in any mature language. After exploring the landscape for a number of years, it's nice to just become deeply fluent in a couple of things. I know people love to talk about this language vs. that, and how language X will make you rethink everything, and usually there is some truth to it, but in the big picture it's usually pretty trivial in terms of actually getting stuff done.
I agree with you here. Several languages (let's call them "advanced"?) are better than others, OK, but very few actually introduce "innovative" constructs. Most of the perceived differences in recent languages are actually in the design of the libraries built around the core language, which is underwhelming.
After some years learning "advanced" languages, and having mastered the constructs, the biggest problem becomes getting acquainted with the libraries. It takes time to re-learn what to use to do something, not how to do it.
Another problem of mine is that a lot of time is spent writing/using bindings to foreign code, or even re-implementing the code in the current language. Or seeing your old code stop working because of new libraries, compilers, etc.
In the end, it feels like you're writing just for the sake of writing instead of actually doing something.
>In academic research and in entrepreneurship, you need to multiply your effectiveness as a programmer, and since you (probably) won't be working with an entrenched code base, you are free to use whatever language best suits the task at hand.
I disagree. If you're a startup, you're better off using a more mainstream language. I'm from India, and if I set out to find a good Haskell or Scala programmer, I'll be looking for a long time and I'll be paying him/her a lot for those skills. Being a startup, you start small, and it's tempting to start using bleeding-edge technologies, but you must do a cost-benefit analysis; coders for PHP and Java are much easier to find, and since the supply is greater, the cost of the skills will be lower.
Bootstrapped startups should particularly be wary of using technologies that have a very small talent pool in their part of the world. Use new/rare technologies if the benefits of using them outweigh the hassle and expenses of maintaining and extending the team/codebase for it.
> I'm from India and if I start to find a good Haskell or Scala programmer
In general, if you need to find a good programmer for any language (Python, JavaScript, Java, ...), you will be searching for a long time. The only difference is that with Haskell, OCaml and other advanced languages, usually the first programmer you will find is already very good.
However, finding that first 'good' programmer is going to be harder for languages like Scala and Haskell than for more popular languages.
No it isn't; that's the point. It is easier to find any programmer. It is not easier to find a good programmer. 99% of Java programmers are bad. 50% of Scala programmers are bad. Finding the 1% of good Java programmers is just as hard as finding the 50% of good Scala programmers.
I disagree (again). If you are a technology-focused startup, chances are you are looking for highly talented people for your core team. You usually don't want the average PHP or Java Joe; you want more! And those you are looking for are usually not afraid to learn yet another language, if they don't already know it.
On the other hand if you are not building a technology focused startup, but simply need some software as a vehicle to drive the main business, you are right to look at the cost-benefit analysis and choose a more mainstream language, for which you can find cheap and easily replaceable labor in your area.
I'm not saying you can't get the job done in a mainstream language; but I postulate that the mindset and attitude towards code of a good Haskell or Scala programmer is vastly different from that of your less expensive PHP or Java programmers. Some want to push the limits of what's technologically possible and others just want their paycheck at the end of the month; both kinds exist in both camps, but are they evenly distributed?
Is it really so hard to take existing talent and train them to use other languages? Especially in cases where existing library and tooling ecosystems are available (e.g. anything based on the JVM), the actual language probably won't be a showstopper for the level of talent you need. If a new language takes a couple of weeks to get up to speed in, so what? The bigger problem, IMO, is the availability of libraries and basic development tools like build systems and dependency management. I'm fairly pleased with how many of the newer languages that build on the JVM allow easy interop, so that you can use the existing ecosystem when it makes sense.
>Is it really so hard to take existing talent and train them to use other languages?
This will go into the costs column, and the reasons you use that particular tech stack will go into the benefits column. Weights and values are fairly subjective for each of the rows in the two columns and are very case-specific.
What I wanted to point out was that there are a lot of factors to consider while choosing your tech stack, be it for academia, research, startup or enterprise. I disagree with this following statement:
>, and since you (probably) won't be working with an entrenched code base, you are free to use whatever language best suits the task at hand.
Size of the codebase is not the only criterion. (And to be fair to the author of this article, "task at hand" envelops a lot of the other criteria.)
PS: I found a good debate between Ryan Allen[1] and Michael Wales[2] on the criteria for choosing PHP or Ruby; a lot of them can be translated to a general tech-stack choice. I've submitted it to HN here[3], and the direct link is here[4].
I can't comment on the specific case of your startup, but given the wide adoption of PHP/Java/Ruby/Python compared to Haskell/Scala, it comes down to a simple equation:
No. of programmers for the former > No. of programmers for the latter
But as someone in the thread already pointed out, the definition of a 'good' programmer is also subjective and different in each of the languages, so that will fuzz the simplicity of the above equation.
PS: I'm very impressed with tech community in Pune in general. Very vibrant, diverse and passionate.
Number of programmers is often not proportionate to the number of good programmers.
By your simple equation, you'd be hiring only Turbo C programmers :D
But I do agree with you - if your startup is not primarily a technology one, then maybe it doesn't matter. Throw enough people at it and it might just work
My biggest motivation for learning a language like Haskell or Scala would be the total shift in thinking required to grasp the language (the author mentions this). I've started playing with Clojure lately, and even though I've barely scratched the surface (primarily poking around 4clojure.com), I could already feel my brain twisting in ways I've never experienced before. I firmly believe that once you turn that corner and really start to get it, you start seeing and attacking problems in such a different light that there's no going back. The big problem I've had recently is finding the motivation to learn anything outside of work (programming/engineering related, at least); once you start getting paid to do something, it's really hard to do it for free, haha.
"In academic research and in entrepreneurship, you need to multiply your effectiveness as a programmer, and since you (probably) won't be working with an entrenched code base, you are free to use whatever language best suits the task at hand."
What about libraries and code-bases available to the programmer? Legacy code might not be a concern, but what about having to reinvent wheels?
Any advanced language worth its salt will allow you to link libraries from other languages. It may require some work, but at least it can be done; whereas you can't expand the capabilities of your programming language of choice as easily.
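To illustrate the kind of cross-language linking meant here, a minimal Haskell sketch calling C's `sin` through the FFI (assuming GHC and a standard libm; no extra binding library needed):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Import C's sin as a pure Haskell function; Double is a
-- marshallable foreign type, so no conversion code is needed.
foreign import ccall "math.h sin" c_sin :: Double -> Double

main :: IO ()
main = print (c_sin 0)  -- prints 0.0
```

More involved libraries need wrapper code, but the mechanism is the same.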
I think no language is advanced enough to be used alone for everything. Steve Yegge said that at one time at Amazon the only languages allowed were Lisp and C. It makes sense: a high-level language plus a low-level one. Languages that try to encompass too much become too complex and hard to use effectively (think of C++).
This is true, but there's a lot to be said for native libraries which can be used more naturally. More important for things like frameworks, etc. than algorithms / highly technical code. E.g. using the Python wrapper for OpenCV is fine; a little bit janky, but it works, and I definitely don't want to write those myself if I'm focusing on a product. On the other hand, would you want to use a wrapper for a web framework written in C? Probably not.
Yegge wrote that in "Tour de Babel". I don't think it is really true tho.
When I started in 1997 (before Yegge) there was lots of C and Perl but no Lisp, at least not in the central repo. There were restrictions on Perl. The DBAs wouldn't let us use it to insert data in the database. Shel didn't care much for Perl. He might have warmed to it a little bit when he discovered it supported closures; I remember him sending a msg to the software alias about it.
Eric Benson had been a principal at Lucid, though I didn't know it at the time. Once he gave a class to the perlhackers on how to write code. The advice, as I recall, was to put "use strict;" at the top of our scripts and to start every function by shifting all the arguments into local variables. Eric Benson wrote the initial version of the "customer who bought this also bought..." code, and he used Perl, not Lisp. I'm sure he thought about the problem in Lisp terms, tho.
I read him religiously but I don't remember him even coming close to making such a claim and searching for "yegge amazon lisp" doesn't turn up anything.
So the article first states a mathematical definition:
> A bounded lattice is a mathematical structure that has a least element (bot), a greatest element (top)
And then proceeds to define a typeclass that does not satisfy the definition:
    instance (Ord k, Lattice a) => Lattice (Map k a) where
      top = error "Cannot be represented"  -- :-(
You can't even fix the definition of top, since the union of arbitrary Maps is not bounded; e.g. there is no finite Map String foo (a map with String keys) that is a superset of every Map String foo.
You could change Lattice to a weaker structure, but I think it would be better to present a simpler typeclass like Show or Ord.
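A weaker structure along those lines might look like this; `JoinSemilattice` is a hypothetical class invented for illustration, not from any standard library:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- A join-semilattice with a least element (bot) but no top,
-- which Map *can* lawfully satisfy.
class JoinSemilattice a where
  bot  :: a
  join :: a -> a -> a

instance JoinSemilattice Bool where
  bot  = False
  join = (||)

-- bot is the empty map; join is union, joining values
-- pointwise when a key appears on both sides.
instance (Ord k, JoinSemilattice a) => JoinSemilattice (Map k a) where
  bot  = Map.empty
  join = Map.unionWith join

main :: IO ()
main = print (join (Map.fromList [("a", True)])
                   (Map.fromList [("b", False)]))
```

No `top` is required, so the Map instance needs no `error`.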
All the languages he suggests apart from Scala are or are pretty close to being pure functional languages.
I'm not convinced that I would be more productive developing, say, a web app, database app, or game using Haskell rather than Java or Python, especially since a large amount of programming is all about state and side effects.
I'm not against learning a pure functional language as a way to improve your programming especially for concurrency or to solve specific problems.
Are there any large pieces of software outside of academia that are written primarily in functional languages?
"I'm not convinced that I would be more productive developing, say, a web app, database app, or game using Haskell rather than Java or Python, especially since a large amount of programming is all about state and side effects."
Haskell isn't about avoiding side effects. It's about managing them.
Seriously, it's time to drop this criticism. Obviously Haskell can "do things". It may not have thousands and thousands of apps of every type, but when it has an X window manager, a DVCS, and a few web frameworks under its belt, it clearly isn't about "not having side effects". (Nor is that a complete list; my point is that is sufficient evidence to show that things can be done, not a complete app list.)
Every functional language you've ever heard of has reasonably serious apps, but I'm not sure there's one yet where you won't encounter at least one of 1. a tooling problem (Erlang, Haskell) or 2. an impedance mismatch between the tooling and the language (Scala, Clojure). Time is correcting this problem rapidly, but if you don't feel like being on the cutting edge (and that's a fine position to take), I wouldn't push them on you. (But I would say that all four of those languages I mentioned are indeed the cutting edge and have come off the bleeding edge. Real work can be done in them if you stick to their strengths, but you may hit some bad error messages, or road blocks you'd normally just download your way around.)
"...especially since a large amount of programming is all about state and side effects."
Haskell provides lots of tools for reasoning about and managing side effects; it just doesn't (and, indeed, can't) allow them arbitrarily. In terms of interacting with the outside world—e.g. doing GUI work or interacting with a database—you usually have an effectful 'core' that manages side effects while a slew of pure helper functions operate on the data derived from those side effects. (This is good style in non-functional languages, as well, as it is a kind of separation of concerns.) Other functional languages are far less stringent about side effects, so both the ML family and Scheme allow state modification and side effects, although the functional style is preferred when possible.
In terms of writing algorithms, you'd be amazed at how little needs to be expressed in terms of manipulation of state. It feels natural to write stateful functions if you've spent large amounts of time programming in stateful languages, but that has more to do with your familiarity than a property inherent in the problem. If you were immersed in Scheme, you might say, "...a large amount of programming is about function definition and application", just like if you were immersed in Forth, you might say, "...a large amount of programming is about stack manipulation." You certainly could use e.g. Scheme as language for general-purpose programming, and I suspect the problems that would arise would not be problems with functional programming, but rather an unfortunate lack of modern libraries for things like Unicode handling.
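A toy sketch of that "pure helpers, effectful core" shape (the function name is made up for illustration):

```haskell
import Data.Char (toUpper)

-- Pure logic: trivially testable, no IO anywhere.
shout :: String -> String
shout = map toUpper

-- Effectful core: all interaction with the outside
-- world is confined to main.
main :: IO ()
main = do
  line <- getLine           -- side effect: read input
  putStrLn (shout line)     -- side effect: write output
```

Everything interesting happens in `shout`; `main` just ferries data in and out.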
I agree that you can vastly reduce state in a large number of cases. It just seems strange to write in a purely functional language when there are many languages out there that combine functional style with the ability to write in an imperative way.
I can quite happily write programs in Python without any real side effects; indeed, as you suggest, I usually try to divide my functions between those which have side effects and those which don't. So what advantage would I get from picking Haskell instead? Other than marginally shorter programs?
FP seems to be picking up steam recently; is this just because we are doing more computation in the cloud and CPUs are getting more cores rather than more raw clock speed, or am I missing something?
The advantage is that you can reason about the correctness of code. Sure, if you write an entire code base yourself you may, depending on its size, be able to reason about all the side effects in your head. But when moving to a larger project, or one developed by a team, all bets are off. It is really important to have the effects embedded in the type system, even with a large test suite.
On top of this, having professionally written C/C++ for years in the embedded systems space I can say that Monads for IO is not awkward. In fact, it feels far more powerful because you are not limited to the awful semicolon operator for chaining effects.
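For instance, chaining two effects without any "semicolon" looks like this; do-notation is just sugar for it:

```haskell
-- (>>=) feeds the result of one effect into the next;
-- the do-block below desugars to exactly the same thing.
main :: IO ()
main =
  getLine >>= \name ->
  putStrLn ("Hello, " ++ name)

-- Equivalent:
-- main = do
--   name <- getLine
--   putStrLn ("Hello, " ++ name)
```

Because `(>>=)` is an ordinary function, you can abstract over effect-chaining itself, which a built-in semicolon never allows.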
Lastly, FP != slower (even on a single core). I agree Haskell can be hard to reason about performance-wise at times, due to choosing lazy by default. However, have you ever heard of Sisal? Many strict-by-default FP languages are quite fast, though they sacrifice purity. But to be honest, with experience Haskell can be damn fast also.
Interesting; my question, though, is more: why now?
LISP has been around for a very long time, and FP had very little mainstream interest until the last year or two.
Additionally, I think that Haskell really has to offer something over most other FP languages. Many features of dynamic impure FP languages have already been ported to languages such as Ruby or Python. What Haskell provides over, say Lisp, is exceptionally strong typing and isolation of side-effects. This is a quality that cannot be ported to mainstream languages easily.
So, why didn't Haskell become more popular earlier? I think that's simple: the GHC compiler has improved tremendously over the past years, Hackage has grown considerably, and there are now more books oriented at practical programming.
After spending some time exploring different functional languages, if FP does make it at all, my money would be on Haskell. There is something to be said for focused languages that do one thing but do it really well, and for the feeling of security when a certain style is enforced over a codebase.
I probably wouldn't use it for anything today, but in another couple years maybe. The ecosystem seems like it's reaching critical mass.
Actually, most Lisps offer optional means of static verification and static typing for when it's necessary. Furthermore, Lisps have vital metaprogramming features that Haskell lacks, which are essential for symbolic AI applications.
The other day I ran a blog post about Haskell at Standard Chartered (written in, um, Hungarian?) through Google Translate. Very entertaining! But SC's Haskell code is probably becoming a substantial codebase.
I think you may argue that some languages are in practice more or less pure by how much they encourage/enable avoiding mutability and side effects, but of the 4-5 languages he discusses I'm pretty sure only Haskell is purely functional. Scala, Scheme, and SML/OCaml are all impure functional languages that allow mutability/side effects. (Scala is considered an object-functional hybrid by some and an object-oriented language dressed as functional by others; trying to avoid a flamewar.)
Haskell has pandoc (http://johnmacfarlane.net/pandoc/), a command-line tool to convert between markup formats (also a library, with bindings to several languages). I think it supports a wider range of formats/fidelity than the alternatives.
OCaml:
MLDonkey – an open-source eDonkey2000 client.
Coq – an interactive theorem prover.
I've written public web applications in Ur/Web (a functional language for web development, http://www.impredicative.com/ur/) and the backend for one in ATS (http://www.ats-lang.org). I've found them both pretty productive.
I wouldn't say FP is big in finance. On the other hand, financial companies are probably over-represented among the industrial users of FP.
There are several prop trading firms (Jane Street, Tsuru Capital, etc) that use functional languages heavily. Also, many of the big banks (Credit Suisse, Deutsche Bank, Barclays, etc) use some FP in production, but as far as I know it's only used by small, strategic groups. FWIW I've interviewed or applied at many of those big banks in Tokyo and not one of them ever mentioned interest in FP skills. As far as I could gather the typical developer there doesn't know or is not interested in FP.
I don't know how far toward the functional paradigm Arc leans, but Hacker News is programmed in it. I know it's built off Common Lisp, so there is that lineage.
The Naughty Dog game studio used a form of Lisp in their games, starting with Crash Bandicoot, until they got purchased by Sony (I think). Though the language was much more imperative in nature than functional.
The dreaded "Unknown or expired link" error if you flip through a few pages. Thank heavens there is hckrnews.com.
Merging discussions is not allowed.
No checking of duplicate links, if I'm not mistaken.
Non-foldable threads.
No way to watch threads of comments.
No notifications when someone answers a comment of yours.
After you have submitted a comment, it just reloads the page instead of taking you right where your fresh comment is, to let you continue reading where you left off.
Item lists in comments are not handled.
How do you format text in comments? Nobody tells you.
Basically, if you take HN at face value, it does not look like a smart piece of software. Maybe under the hood there is a mind-bending piece of AI, but as long as it doesn't care about users, users will never know about it.
The thing was written to give a linear history (if I'm not mistaken), so that when you click next page it's what the next page would have given you at the time you first loaded the page. He could just use a page=4 query string and not use continuations, making the pages the same for everyone, with the risk of seeing duplicates.
It will vote for the original if the URLs are the same when you submit it. This is a simple check, though.
Foldable threads are client side.
I wish it had notifications.
The other items re formatting could be easily fixed on the submission page.
I wonder if the lack of state is why I keep getting the "Unknown or expired link" message all the time?
I see allot of functional DSLs (Excel for example, plus AutoCAD as you pointed out).
Was Crash Bandicoot written in Lisp, or was this just an internal DSL used for certain operations?
I don't have anything against functional languages and I'm sure all good developers should know at least 1 but the article seems to suggest that everyone should use them all the time.
There are more important things than having an expressive language, I use allot of Java which is seriously non expressive in many ways but I have more confidence in it to solve the category of problems I deal with than any other language that I have tried thus far.
Naughty Dog used an internally developed programming language for the whole game (or at least most of it). A big part of the appeal was apparently the dynamic nature of Lisps, which allowed reloading of code at runtime to simplify debugging. This article is a postmortem about Jak and Daxter; the next page also mentions the drawbacks of using GOAL:
You get that message fairly often because the implementation uses continuations (http://en.wikipedia.org/wiki/Continuation). This is one functional style for making a stateless protocol (HTTP) into a stateful one. HN could use a different design (one with less state) and avoid these issues with a few different trade-offs. This is an implementation detail rather than something specific to FP vs. non-FP. Also, best I can tell, HN still stores all of its data in flat files rather than a traditional datastore.
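A tiny sketch of the continuation-passing idea, with nothing to do with HN's actual code; `addCPS` is invented for illustration. Each step is handed "the rest of the computation" as a function, which is roughly what the server keeps around behind each expiring link:

```haskell
-- Instead of returning a result, each step passes it to a
-- continuation k, "the rest of the program".
addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y k = k (x + y)

main :: IO ()
main = addCPS 1 2 (\s ->   -- s = 3; the lambda is the continuation
       addCPS s 3 print)   -- prints 6
```

When the server discards a stored continuation, the "rest of the page" it represented is gone, hence the expired-link error.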
I appreciate your interesting comments, but just FYI, it's "a lot" not "allot". I wouldn't normally point out a spelling error but I noticed it enough to be distracting and thought you might not know (in case of ESL).
since a large amount of programming is all about state and side effects
IANA web developer, but...
Doesn't web development typically put state into a database system instead of the scripts the server runs to generate the requested pages? It seems reasonable enough to have a function that takes a page request as input, calls some other functions to look up info from the database, then constructs and returns a page and a sequence of changes to make to the database.
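That shape can be sketched as a pure function; all types and names below are hypothetical, not from any web framework:

```haskell
-- A request comes in; the handler sees the relevant DB rows
-- (fetched by an effectful layer elsewhere) and returns a page
-- plus a list of changes for that layer to apply.
data Request  = Request { reqPath :: String }
data DbChange = Insert String deriving Show

handle :: Request -> [String] -> (String, [DbChange])
handle req rows =
  ( "<h1>" ++ reqPath req ++ "</h1><p>" ++ show (length rows) ++ " rows</p>"
  , [Insert ("visited:" ++ reqPath req)] )

main :: IO ()
main = print (handle (Request "/home") ["row1", "row2"])
```

The handler itself never touches the database, so it can be tested with plain lists.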
You can do allot with a database, sure, but sometimes you want multiple transactions within a single request, and data will change between them. You may also take input from sources other than a database, such as flat files, web APIs, emails, etc., and need to do a bunch of processing between them.
Not to mention if you are using Comet or something where you hold a request open for a long time and push and pull allot of data some of which may not be persisted immediately.
Sure you can probably write your controller logic in a pretty functional way but then you probably end up moving your messy logic into stored procs or something else.
sometimes you want multiple transactions within a single request and data will change between them
Is this an issue with race conditions, or with alternating between reading and writing the state during a single request, or both, or something else entirely?
You may also take input from other sources than a database
This doesn't seem like it should make a difference: you write a function to get the necessary data from whatever source, then call that function in the page request handler (OK, yes, I/O is technically a side effect).
I agree that the processing you do with that data might require some significant rethinking, depending on how it was written/planned out before (sure, state-passing style is always there, though I don't enjoy doing it... really though, I've found that a lot of what I was used to doing imperatively is not all that awkward to do functionally).
Not to mention if you are using Comet or something where you hold a request open for a long time and push and pull allot of data some of which may not be persisted immediately.
What's the conventional way to do this in an imperative language?
Is this an issue with race conditions, or with alternating between reading and writing the state during a single request, or both, or something else entirely?
Usually reading and then writing some state, followed by waiting briefly to see if another thread will change the same value; if it does, then do something with the new value and return that to the request.
I agree that the processing you do with that data might require some significant rethinking, depending on how it was written/planned out before (sure, state-passing style is always there, though I don't enjoy doing it... really though, I've found that a lot of what I was used to doing imperatively is not all that awkward to do functionally).
Depends on how much more difficult it is: does it just require an adjustment of thinking to a new paradigm, or is doing everything harder? I'm thinking of stuff like Haskell monads here. I admit I have not much functional knowledge beyond some basic Scheme, so I'm curious about this.
What's the conventional way to do this in an imperative language?
I guess allocate stuff on the heap which other threads can modify, use monitors to control concurrent access.
does it just require an adjustment of thinking to a new paradigm or is doing everything harder?
Depends on how restrictive your language is. Scheme makes a lot of things easy, e.g. a sequence of statements is often easy to express with begin/let/let*, whereas Haskell is pickier about the use of side effects. The biggest hurdle I had in learning Scheme was getting used to using a recursive function where I would normally use a loop.
    (let* [(x (... DB query ...))
           (y (... API query based on x ...))]
      (begin (... write some state based on x and y ...)
             (... wait a bit ...)
             (... read the same piece of state ...)
             (if (... it is unchanged ...)
                 (... then ...)
                 (... else ...))))
The line between imperative and functional starts to look a bit fuzzy, but let* and begin can be constructed from lambda expressions.
Haskell is actually awesome for something like Comet or just about anything where concurrency is involved.
I'm launching a small Haskell-based product called EventSource HQ (http://www.eventsourcehq.com) taking advantage of just this.
Handling state across threads in Haskell is not a problem at all. Haskell is not bad or complex when it comes to mutable state; it's just very explicit about it. When it comes to doing stuff that involves multiple threads, this is a huge advantage.
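A minimal sketch of that explicitness, using GHC's bundled stm package: two threads bump a shared counter, and every mutation is marked by `atomically`:

```haskell
import Control.Concurrent
import Control.Concurrent.STM

main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)   -- shared mutable cell
  done    <- newEmptyMVar
  -- Each bump is an atomic transaction; no locks to manage.
  let bump = atomically (modifyTVar' counter (+ 1))
  _ <- forkIO (sequence_ (replicate 1000 bump) >> putMVar done ())
  sequence_ (replicate 1000 bump)   -- main thread bumps too
  takeMVar done                     -- wait for the forked thread
  readTVarIO counter >>= print      -- prints 2000, never a torn count
```

The types make it impossible to mutate `counter` outside a transaction, which is exactly the explicitness being described.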
Haskell is complex in the sense that it brings a whole bunch of new abstractions to the table that can take a lot of work to get a hold of. But while this can make for a steep learning curve, it also comes with a huge payoff, since these abstractions are very powerful.
Stuff like Iteratees and Enumerators, or the newer Conduits, can be tough to wrap your head around, but once you do, the power of composable abstractions over IO streams can be extremely useful.
We use Erlang to run our (several million messages a day) WebSockets implementation. It's been fine, way more stable and less resource intensive than the equivalent Tornado server.
What exactly makes these languages "advanced" (as opposed to "complicated" or "difficult")? Or does "advanced" here mean "harder" as in "advanced math"?
Programming language research is a very large and deep field, though a lot of industrial languages are fairly divorced from it. All of these languages have a strong theoretical foundation. This pays off well. One common statement from ML programmers is "If it compiles, then it works."
>Programming language research is a very large and deep field
I agree with you. But to play devil's advocate: if it's so large and deep, why do I only hear about Haskell and Scala when language geeks talk about "advanced" programming languages? Looking at the Wikipedia link you provide, there's a list of bullet points by decade: 7 things worthy of a bullet in the '60s, 7 in the '70s, 4 in the '80s, 2 in the '90s, and 0 in the '00s and '10s. Has nothing interesting happened lately? Is Haskell as good as it gets?
It's debatable just how good Haskell and similar languages are. Like any language, it has its ardent fans and bitter detractors.
As for development of "advanced programming languages" in general, if you are interested in what's out there, I strongly recommend you browse the archives of the "Lambda the Ultimate" weblog:
There are a lot of very knowledgeable language designers on there, and news of advanced language features and new languages filters through there all the time. The blog is not as active as it once was, but the archives are still very much worth studying.
Another great resource is the old c2 wiki. Here are some relevant starting points:
Finally, just hang around HN for a while, and you'll see plenty of posts discussing new "advanced" languages. The HN archives are also worth browsing through.
No, it is not. The deeper I get into Haskell, the more references I find to esoteric research languages like Agda (which itself is still pretty big compared to many others). The thing about Haskell is that it is probably the most "researchy" language that is also practical to use for general purpose programming.
Really? Haskell and Scala are sooo last century. Look up Agda or Epigram. ;-)
Seriously, though, it takes time. Lisp had higher-order functions and garbage collection long before they became mainstream. The type systems used in OCaml and Haskell were discovered in the 70s. Maybe in 10 years we'll see what of the 90's ideas become really valuable (or crappy but mainstream...).