With the exception of pandoc, I don't recall seeing anything particularly impressive from the Haskell camp.
(Btw, this -- actual stuff people use -- is how I measure programming languages; this metric takes communities, libraries, practical issues, etc. into account all together and ties them to practical results. It's my version of "let the market decide".)
Darcs used to be the poster child for Haskell but then somebody came along and wrote a much better DVCS in C, of all languages. Draw your own conclusions...
Still use darcs to this day. Darcs is magical AND productive. I use a script to play back my darcs commits into a git repo so I can put my projects on github. I love you darcs, I love you.
You may want to check out Warp [1]. Another quite popular Haskell project you may have heard about is this compiler some people use... it's called GHC :) [2]
There are a few startups out there who use it as their core tech. It's a sweet spot because you don't have any legacy code to support and it allows you to slap together fairly stable code fairly fast, at least once you get to know the basic tools. E.g. picking up Yesod (which, to be fair, isn't something you'll do in a couple hours) gives you all of the niceties and rapid prototyping of a tool like Rails, plus the obsession with type safety that gives you that handy line of technical credit.
I can't imagine ever using Haskell for a startup. Unlike pg's python paradox, the only programmers I could imagine applying would be programming language theorists, toy programmers, and people who think they are productive (because arrows!), without ever having built anything ever. I would only do it if it meant I could hire John Macfarlane.
We've actually had a pretty good time hiring for both Haskell and Clojure, it attracts a certain type of developer that a team like mine likes to work with.
There are certainly applicants from the groups you mentioned, and we do our best to filter them out.
It's interesting to hire people for stacks that most programmers have no production experience with (and have pretty much no way of getting), but it's certainly been done before.
In my view the tech in a startup pretty much doesn't matter. You either make something someone will pay for, or you don't, and then you die whether you use Fortran or Coq. Thus you might as well make yourself comfortable for the ride and use whatever you'll enjoy building stuff in, something you won't be easily bored of using and teaching others.
If you get to the stage where you need to quickly bring hundreds of developers up to speed, you've pretty much already made it and you're experiencing growing pains; that's a good problem to have. Most of your code won't survive that scale without a serious rewrite anyway. You'll deal with that when you get there. The vast majority of people will never get that far.
We use Haskell heavily at my startup and it's definitely a competitive edge and well suited to the many general programming problems we solve with it (amqp processing client, rest api, javascript heavy webapp w/ haskell web framework backing, db orm modelling, scrubbing and feeding data into influxdb, the list is long).
I know quite a few professional programmers who credit Haskell with making programming fun for them again (this holds true for me too and my production skill set includes php, c, c++, python, ruby, erlang, scheme, javascript, scala).
Yeah, I've got a similar impression. The only 2 things I care about are 1) productivity (time to finish a task) and 2) how pleasurable a language is to use.
And I just don't find Haskell (or Lisp) to be as productive as many would suggest, even if you adjust for the maturity of the ecosystem. (Note: I have moderate experience with Lisp, very little experience with Haskell, and these days I mostly program in Java, Julia and Python.)
I wrote the core of our system in Clojure a couple of years ago, which is pretty high in productivity and actually quite pleasant to work with. Unfortunately as the scale goes up (hundreds of web application routes, sharing code among multiple projects etc) it doesn't feel nearly as nice anymore, and having static typing turns out to be pretty handy.
Haskell to me felt exactly like what you described: I could get stuff done fast because of how few head-scratchers I'd experience. This is mostly due to type mismatches getting caught early, and to how much better thought out my design needed to be upfront. I also really enjoyed the language because I could express fairly complex thoughts in a very succinct, and yet very readable, fashion, mostly thanks to types being explicit and enforced at compile time. Code reuse and sharing across multiple applications is also a breeze, which is absolutely key once you get past the "single Rails app" stage and you start getting into two digits' worth of tools, services and applications.
Oh ok, I can be wrong about Haskell. BTW, I really like Ceylon: it's very statically safe (more than Haskell, I'd say) and very pragmatic (unlike Haskell, which IMHO doesn't seem to focus on maximizing productivity).
Why? You assume that little experience with Haskell is not enough to draw conclusions about its productivity. I disagree with this assumption; I think you can sometimes draw moderately confident conclusions from little experience. And I understand I can be wrong, of course.
The point of graphs like these (aside from making a joke) is to describe the learning curve. These graphs admit that the beginning of haskell is a nightmare, but claim that eventually you have "unbounded" productivity.
It seems rather hard to evaluate the tail end from "little experience".
It's an incorrect conclusion that requires further education and experience before you can be sure your subjective experience of productivity truly is that poor.
Please, this is getting old. Haskell is one of those languages for people who want to show off how clever they are instead of just getting on with developing applications that actually do useful things efficiently.
I hope you are trolling. Most Haskellers value getting things done over being clever, and actually actively avoid being clever. In your other comment you point out that many of PHP's detractors have not used it or have very little experience with it.
Do you have experience using haskell? If not, you are being hypocritical. Please stop.
>Haskell is one of those languages for people who want to show off how clever they are
Yes, that's what banks are most known for. Showing off how clever they are and not doing anything useful. Facebook and google certainly fit that profile as well right?
I am suggesting that your confidence is misplaced. Having little experience in something means you should have little confidence in your knowledge of that thing.
"The term verbal contract is sometimes incorrectly used as a synonym for oral contract. However, a verbal contract is one that is agreed to using words, either written or spoken, as opposed to an implied contract."
It's funny how GP is heavily downvoted even though he's right. Downvoters (which I assume you're one of) didn't even check their facts.
Contrary to common wisdom, an informal exchange of promises can still be binding and legally as valid as a written contract. A spoken contract is often called an "oral contract", not a "verbal contract." A verbal contract is simply a contract that uses words. All oral contracts and written contracts are verbal contracts. Contracts that are created without the use of words are called "non-verbal, non-oral contracts" or "a contract implied by the acts of the parties."
So, that's clearly saying that people use "verbal contract" to mean "spoken contract" - it's not an example of someone using the term "verbal contract" to mean "written contract".
Since you're the one claiming "verbal" equals "oral", you're the one who's supposed to do the research. Wikipedia has a citation on that specific paragraph and there are lots of results on Google. Do your homework before making your claims.
But anyways, "verbum, verbi" means "word" in Latin while "os, oris" means "mouth". That should be a clue.
"Verbal contract" always refers to spoken contracts!
This is about usage, not definition. Interrupting a thread in a snarky manner with a pointless aside about a common (mis)use of "verbal" deserves a downvote.
Your inability to find an example of someone using "verbal contract" to mean "written contract" has been noted. :p
And again, they carefully use written to mean written and verbal to mean spoken.
The first two pages of my Google search failed to show anyone using "verbal contract" to mean "written contract" - and it's pretty obvious why. A written contract is just a contract, or if you really need to specify whether it's written or spoken you'd be obtuse to use the word "verbal" to describe a written contract.
1) Write a script to answer requests without ever intending to pick up.
2) Get out of the car, don't transfer the other half.
3) Sign up as a driver, and wait until you find an attractive/rich fare. Lock doors and do what you will.
Deregulation is not the answer for everything. Uber/Lyft works precisely because they are very well regulated to stop the cases I outline above, and more.
"you require each user to use a valid credit card"
Who is "you"? Someone, somewhere would be creating a system that verifies unique identities, aggregates ratings, etc. Even if the logic is peer-to-peer, the system requires an extremely robust engineering infrastructure to keep it going, and the people that build that thing take on all the issues Uber is dealing with, but without real control over the system.
Or to put it differently, it's not a coincidence that all big peer-to-peer systems are anchored in weird spots around the world. In any major country the government will hunt down the keepers of the system and hold them accountable for the content.
So, to answer your question directly, you're missing a whole lot.
Fair, but I think if Uber truly does remove the regulatory hurdle, then their high-overhead model is not necessarily going to flourish, unless they 'kick the ladder out from behind them' with Uber-specific (ha!) regulatory exceptions. Is the software that handles all these issues really worth XYZ% of the revenue, or could it be done leaner?
My issue with the watch is the crown control. It just feels lazy to me to take a control mechanism made 100+ years ago for winding mechanical watches off your wrist, and repurpose it for digital control of a watch on your wrist.
Is it possible that the best possible UX solution for winding a mechanical watch and controlling a digital OS is exactly the same? Perhaps. But that seems improbable to me. It's hard to know until the thing is out in the wild, but I would expect a lot of people fiddling awkwardly with the top half of that tiny little dial as the bottom of the dial digs into their wrist. Doesn't seem terribly fun.
Or to look at it differently, both of Apple's other consumer hits (iPod, iPhone) introduced a navigation interface that was completely novel and way better than anything else on the market (iPhone => finger navigated multi-touch screen, iPod => rotary dial). A crown on a watch is definitely not novel, and I'm thoroughly skeptical it will be way better than its competition.
That being said, it's unlikely that this thing bombs. But as a test of innovation post-Steve, I'm just not seeing it. And over time, the luster of Apple will fade if there's no innovation.
The control mechanism that's lasted over 100 years obviously works well. We are not too far into the touchscreen era, and when the thing is only 1 inch on each side the touchscreen isn't going to work especially well.
Someone made a mockup a week or two ago that used the ring around a normal watch face as an input mechanism. I actually thought that was kind of a neat idea. I'd kind of like to see one of the Android watchmakers give it a try. But it was more of a 'watch with some interaction' (like the old Timex Datalink) than a 'smartwatch'.
Crown mechanisms worked well for a completely different purpose. They are the easiest way to set and wind a watch that is not on your wrist. Totally different use case than controlling the watch's screen while you're wearing it!
Apple could easily do that for a round "Apple Watch 2". It's the same as the digital crown in function, just in a different place. They just have to make the bezel touch sensitive. The feel of using it would be very reminiscent of the iPod scroll wheel.
Ventura Watches[1] uses a similar mechanism to configure their watch. It is called Easyskroll (TM) and was patented in 2002 under 2002CH-1962 [2] so Apple may have a problem here.
With about 100 years of prior art for configuring the settings on a mechanical watch I don't think that patent should ever have been granted, nor do I think it stands much chance of survival once they try to get money out of Apple.
I'm with you here. You could essentially build from scratch a traffic setup for autonomous vehicles, instead of retrofitting the autonomous vehicles to a current driving system. And by flying you get to avoid lots of complications that will prove very difficult to work around. No pedestrians, snow, signage, construction, etc etc.
On the other hand, traffic in the air might become a very messy problem. Last I heard autopilots are not used during takeoff and landing, so they barely deal with any traffic at all...
You just described the problem nicely. The "magic" that turns 3-lane roads into 2 lanes, etc., is a situational awareness that is really, really difficult to impart to a learning system. The big problem is that probabilistic models don't have a notion of a "common sense" solution to an odd situation. They need to have seen the situation, or something very similar to it, enough times to make a reasonable calculation of what to do.
The 3 lane to 2 lane problem is solved already. These cars follow a centimeter-accurate 3D map that has the lanes precisely defined (as well as acceptable speeds, location of stoplights, etc).
The Google car knows the lane change is approaching long before it shows up on any sensor.
This isn't about a lane change approaching. This is about people disregarding the concept of lanes when there's fresh snow on the ground because they have no idea where the lanes are.
Not quite. I would define common sense as "a reasonable fallback solution given that the current situation is unfamiliar." This is something AI systems have a LOT of difficulty with, the self-driving car being no exception.
I've been saying this for years, as I took Thrun's class when he was at Stanford. Google's dirty little secret is that self driving cars are still mostly smoke and mirrors. Given relatively controlled conditions and a trained driver who can play backup when needed, they work. But if you put them in complicated situations - snow, a busy city environment, abnormal signage - watch out.
The problem is that the driving model is probabilistic. When you solve a problem probabilistically, getting from 90% covered to 99% to 99.9% covered to 99.99% covered involves exponential leaps in difficulty. So even if the car covers 99.9% of driving conditions (and it currently doesn't), there's still a tremendous amount of work to be done to get it to 99.9999% correct, or whatever the threshold is for it to be deemed "safe" for fully autonomous use.
I personally am bearish on the technology, as getting the inconvenient final situational cases correct will be extremely challenging. I would love to be proven wrong, but at Stanford I came to the opinion that the probabilistic approach would get us to really cool demos, but never a fully autonomous vehicle. That being said, the people working on this are a whole lot smarter than I, and I would love to be proven wrong.
One that Google has not solved, for instance, is navigating a gas station. When they fill up gas at Shoreline & Middlefield in Mountain View, I see humans doing the driving.
Approaching this problem as 'navigating around X' is the wrong way to go about it. Once you solve the problem of navigating a gas station, the next one you will face is driving around a school, or a lane where kids are playing. The list would never end.
The idea must be to come up with a generic algorithm that solves these problems as a whole. Not one specific case at a time.
The gas station is a special case though, because the objective isn't just to travel from here to there. Finding a parking space is a somewhat similar kind of special case, where there is a specific objective.
I decided to test a Google self driving car as it crossed the intersection by accelerating into its broadside - no reaction at all. Well, got a reaction from the humans inside.
I recently narrowly avoided getting killed in a broadside collision by braking just in time. If I had been further along, I would have sped up out of the way instead. Would a probabilistic approach handle this? Maybe they need to compile a list of special edge cases.
You can't compile a list of edge cases for this kind of thing, because it is impossible to know the comprehensive list of all the situations the car won't handle correctly.
In the end, you need a learning technology that can properly adapt to any possible situation and give a decent response. Maybe it can be done, but we certainly aren't there yet, and I'm skeptical as to the tractability of the last bit of the problem.
I think they're currently concerned with making sure the vehicle drives safely. Many humans apply evasive maneuvers, only to end up killing someone else or hurting themselves in other ways.
All this shows is that the Google car was driving well, and you weren't. Though I'm sure as the tech progresses they'll look into this sort of thing, and will implement what makes sense.
When an engineer uses the word "trivial," what you should hear is "There are some complications that I'd rather ignore, so let's just hand-wave the answer".
What are these complications? The API's job appears to be to let you iterate through a list of log items/read data from a log buffer/however you want to imagine it. That sort of thing is not rocket science, no matter how difficult it was to make that data in the first place.
(Besides, even if you don't think there's anything wrong with the way it provides the caller with data from the list, there's always the session nonsense to point and gawp at.)
Uh, for starters, the buffer doesn't have infinite size. It will overflow. What is the system supposed to do here? There are a million possibilities (discard old data, discard new data, allocate more memory, write to a file, call a callback, return an error, stall the rest of the system or halt the clock, etc.); some make sense, some don't. Between those that do, the user needs to be able to choose the best option -- and the time-sensitive nature of the log means you can't just do whatever pleases you; you have to make sure you don't deadlock the system. That's not by any means a trivial task, and I'd bet the reason you think it's so easy is that you haven't actually tried it.
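To make "the user needs to be able to choose" concrete, here's a toy sketch -- names invented, nothing like the real ETW machinery -- of what just the two simplest options from that list do to the write path:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical overflow policies -- two of the many options above. */
    enum overflow_policy {
        DROP_NEWEST,   /* buffer full: discard the incoming event        */
        DROP_OLDEST    /* buffer full: overwrite the oldest unread event */
    };

    struct event_ring {
        unsigned char       *data;      /* pre-allocated storage           */
        size_t               slot_size; /* fixed size of one event record  */
        size_t               nslots;    /* capacity in events              */
        size_t               head;      /* next slot to write              */
        size_t               tail;      /* next slot to read               */
        size_t               count;     /* events currently buffered       */
        enum overflow_policy policy;    /* chosen by the consumer up front */
    };

    /* Append one event; returns 0 on success, -1 if it was dropped.
     * No allocation, no blocking -- the write path must never stall the
     * system it is tracing. (Synchronization deliberately left out.)   */
    static int ring_put(struct event_ring *r, const void *event)
    {
        if (r->count == r->nslots) {
            if (r->policy == DROP_NEWEST)
                return -1;                       /* discard new data */
            r->tail = (r->tail + 1) % r->nslots; /* discard old data */
            r->count--;
        }
        memcpy(r->data + r->head * r->slot_size, event, r->slot_size);
        r->head = (r->head + 1) % r->nslots;
        r->count++;
        return 0;
    }

Every other option on that list (allocate more, spill to a file, call back into user code, stall) complicates that path further, which is exactly the point.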
Yes, that's reasonable. But I'm not sure how this doesn't just boil down to configuring how the list is built up. You'd still be iterating through the list afterwards.
The system's hands are somewhat tied, I think. The events are building up in kernel mode, so it can't just switch to the callback for each one, not least because the callback might be executing already (possibly it was even the callback that caused whatever new event has been produced). So all it can do when an event occurs is add the event to a buffer - handling overflow (etc.) according to the options the caller set - for later consumption by user-mode code; I don't think a callback is practical, as that would involve switching back to user mode. In short, it's building up a list, and perhaps the API could reflect that.
This is not to suggest that it would be easy to get to there from here. I've no doubt it could be literally impossible to retrofit an alternative API without rewriting everything. Just that I don't see why in principle an event tracing API can't work in some more straightforward fashion.
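To make "some more straightforward fashion" concrete, here's roughly the shape of API I'm imagining -- every name here is invented, this is not anything Windows actually exposes -- where the caller configures the buffering and overflow behaviour once and then just iterates:

    #include <stddef.h>
    #include <stdint.h>

    /* Invented names throughout -- a sketch of the kind of interface I
     * mean, not a real one. */

    enum trace_overflow { TRACE_DROP_OLDEST, TRACE_DROP_NEWEST };

    struct trace_options {
        size_t              buffer_bytes;  /* pre-allocated, fixed     */
        enum trace_overflow on_overflow;   /* what to do when it fills */
    };

    struct trace_event {
        uint64_t    timestamp;
        uint32_t    kind;                  /* context switch, syscall, ... */
        uint32_t    size;
        const void *payload;
    };

    typedef struct trace_session trace_session;   /* opaque handle */

    /* Start tracing with the given options; NULL on failure. */
    trace_session *trace_open(const struct trace_options *opts);

    /* Fill in the next buffered event. Returns 1 on success, 0 if nothing
     * is buffered right now, and -1 if events were lost since the last
     * call -- so the caller at least knows a gap exists. */
    int trace_next(trace_session *s, struct trace_event *out);

    void trace_close(trace_session *s);

Calling code would then just loop on trace_next, and the overflow decision becomes a one-field setting rather than something baked into the consumption model.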
> What is the system supposed to do here? There are a million possibilities…
No, there are two: You dump old data or you dump new data. Everything else should be up to the user code. It's really not as difficult as you are making it out to be. There's certainly no excuse for a ridiculous API as described in the article.
Huh? If you dump data you miss events. Imagine if Process Monitor decided to suddenly dump half of the system calls it monitored. Wouldn't that be ridiculous? For a general event-tracing system, there have to be more options provided. Maybe it wouldn't matter so much for context-switching per se, but for a ton of other types of events you really need to track each and every event.
Yes, you miss events. But if you try to build the kitchen sink into your low-level logging system then it ceases to be low level. If your logging system allocates memory, then how can you log events from your VM subsystem? If your logging system logs to disk, then how do you log ATA events? It becomes recursive and intractable.
The solution is to make your main interface a very simple pre-allocated ring buffer and have userspace take that and do what they please with it (as fast as it can so things don't overflow).
There is always a point at which your logging system can't keep up. At the kernel level you decide which side of the ring buffer to drop (new data or old) and at the userspace level you decide whether to drop things at all or whether to grind the system to a halt with memory, disk, or network usage.
The options are not simply "drop data" or "don't drop data". The options depend on the logging source, because not every logging source requires a fixed-size buffer. The API itself needs to support various logging sources and thus needs to support extensible buffers (e.g. file-backed sources, the way ProcMon does). Whether or not a particular logging source supports that is independent of whether or not the generic logging interface needs to support it.
I think we're talking past each other here. I don't think we're disagreeing on the userspace part. I'm not even implying that the low-level kernel interface should have unconfigurable buffer sizes. They should be configurable, but pre-allocated and non-growable. You're right, the userspace part can do whatever it wants. But I stand by my last paragraph (you either drop or grind things to a halt).
> Huh? If you dump data you miss events. Imagine if Process Monitor decided to suddenly dump half of the system calls it monitored. Wouldn't that be ridiculous?
All sorts of systems have worked like this in the past (search for "ring buffer overwrite"). If you can't assume unlimited storage, you have to make a decision whether it's more important to have the latest data, dropping older samples, or whether it's more important to maintain the range of history by lowering precision (e.g. overwriting every other sample).
> but for a ton of other types of events you really need to track each and every event.
If you really need this, you have to change the design to keep up with event generation. That's outside the scope of a low-level kernel API where performance and stability trump a desire for data.
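For what it's worth, here's a toy sketch of the "maintain the range by lowering precision" strategy from a couple of paragraphs up -- names invented, fixed-size storage only: when the buffer fills, keep every other sample and double the recording stride, so the same array still spans the whole history at half the resolution.

    #include <stddef.h>

    #define CAPACITY 1024

    /* Toy history buffer: trades precision for range instead of
     * dropping the oldest data. */
    struct history {
        double samples[CAPACITY];
        size_t count;    /* samples currently stored            */
        size_t stride;   /* record only every 'stride'-th input */
        size_t skipped;  /* inputs ignored since the last keep  */
    };

    static void history_add(struct history *h, double sample)
    {
        if (h->stride == 0)
            h->stride = 1;              /* zero-initialized struct is fine */
        if (++h->skipped < h->stride)
            return;                     /* thinned out by the current stride */
        h->skipped = 0;

        if (h->count == CAPACITY) {
            /* Full: keep every other stored sample and double the stride. */
            for (size_t i = 0; i < CAPACITY / 2; i++)
                h->samples[i] = h->samples[2 * i];
            h->count = CAPACITY / 2;
            h->stride *= 2;
        }
        h->samples[h->count++] = sample;
    }

Half the precision each time it fills, but the buffer always covers the full recording window.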