I've got half an hour's walk to work every morning. It's a great way to start the day, especially because the route is through a couple of parks. It clears the mind and wakes me up. I really can't imagine living in a place where that wasn't possible.
Having worked with Scala a lot recently, I've found that the ability to turn logic issues into type issues is an incredible gain for my productivity. OCaml programmer Yaron Minsky summed it up nicely with the advice that you should "make illegal states unrepresentable". He gives a good example of modelling a network connection in [0].
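For a rough idea of what that looks like outside OCaml, here is a minimal Scala sketch in the spirit of Minsky's example (the state names and fields are my own invention, not his actual code):

    import java.time.Instant

    // Each state carries only the data that can legally exist in that state, so
    // "the last ping time of a connection that never connected" cannot even be
    // expressed, let alone reached at runtime.
    sealed trait Connection
    final case class Connecting(whenInitiated: Instant) extends Connection
    final case class Connected(whenConnected: Instant,
                               lastPing: Option[Instant]) extends Connection
    final case class Disconnected(whenDisconnected: Instant) extends Connection

    object Connection {
      // Anything that needs ping data can only be handed the Connected state.
      def millisSinceLastPing(c: Connected, now: Instant): Option[Long] =
        c.lastPing.map(p => now.toEpochMilli - p.toEpochMilli)
    }

A single record with a bunch of nullable fields would happily represent nonsense like a ping time on a connection that was never opened; the sealed hierarchy simply has no way to write that down.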
Another example is instead of shuffling a bunch of bare UUID objects around (or strings, for that matter) in a system where lots of different things have a UUID, I can make a simple reference type for the IDs of different entities. This way, calling a function that takes the UUID of one type of entity with that of another can be a type error that's caught by the compiler instead of a logic error that's caught by a unit test. This is cumbersome at best to do in Java or C, and obviously impossible in Python or Ruby, but in ML-inspired languages it's simply the most natural way to work.
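In Scala that can be as little as a pair of value classes; something like this sketch (the entity names are invented for illustration):

    import java.util.UUID

    object EntityIds {
      // Zero-overhead wrappers: a user id and an order id are distinct types
      // even though both are just a UUID underneath.
      final case class UserId(value: UUID) extends AnyVal
      final case class OrderId(value: UUID) extends AnyVal

      def loadUser(id: UserId): Unit = ()     // stand-in for a real lookup
      def cancelOrder(id: OrderId): Unit = ()

      val userId  = UserId(UUID.randomUUID())
      val orderId = OrderId(UUID.randomUUID())

      loadUser(userId)      // fine
      // loadUser(orderId)  // rejected at compile time: found OrderId, required UserId
    }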
I wouldn't classify those as "type problems" but rather "using types to solve problems". I have no issue with the claim that great type systems make some... ahem... types of problems go away, although it's often trading one type of complexity for another. My issue is with the claim that "long term development is intrinsically better with static typing".
Type systems, at least the more powerful ones, encode a lot of intent and, more importantly, enforce it.
It might add some complexity when first writing the code, but in return you eliminate whole classes of problems forever. It's not just the initial writing that benefits (at some cost, admittedly): all future changes won't have those problems either. When the types themselves need to change to account for expanded functionality or whatever, you again pay some cost in complexity, but in return you get a nice error at every place the new type would cause problems.
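With sum types, the compiler's exhaustiveness checking does exactly that. A hypothetical Scala sketch (names are purely illustrative):

    // Scala warns (or errors, with -Xfatal-warnings) on non-exhaustive matches
    // over sealed types, so extending the type points you at every place that
    // has to be updated.
    sealed trait PaymentMethod
    case object Card extends PaymentMethod
    case object BankTransfer extends PaymentMethod
    // case object Invoice extends PaymentMethod   // the "expanded functionality"

    object Fees {
      def fee(m: PaymentMethod): BigDecimal = m match {
        case Card         => BigDecimal("0.30")
        case BankTransfer => BigDecimal(0)
        // Uncommenting Invoice above makes this match non-exhaustive; the
        // compiler then lists this function as one of the places to fix.
      }
    }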
The other thing to keep in mind is that whether or not these types are codified in the language, they're there conceptually. Having to write them down doesn't necessarily add complexity; it's more like forced documentation that can be used to eliminate whole classes of problems. It's difficult to see how this wouldn't intrinsically be better than so-called dynamic typing.
If you're talking about a good type system, sure. The type system in Go is frankly terrible at describing the kinds of invariants and reuse we care about.
I'd dare say that Dialyzer (Elixir's optional but easy-to-use type system) would lead to more maintainable code than Go's forced type system in the long run.
A type problem is any problem that types can solve for you. People who have been exposed only to very limited type systems consider the range of type problems to be narrow. Still, it is true that most of these problems are solved quickly and caught with unit tests; no one thinks that simple type problems are the source of your production bugs. A good type system is mostly a productivity play for me: I develop a little faster and I refactor MUCH more quickly and with less effort.
I'm pretty familiar with actually-good type systems, and as I've attempted to say, they do eliminate certain classes of problems. They just don't happen to be the types of problems I struggle with or find major productivity-burners. YMMV.
Orthogonal to all this is that I'm underwhelmed by Go's type system :/ If I was going to shoot for a language whose type system really helped me (in the ways I need/want help) it wouldn't be Go. It's pretty slick for some things (aesthetics aside) though.
> The presence of a type system definitely improves maintainability; however, it is only one of many factors. C and Haskell are both statically typed; are they equally suitable for long term development?
They're both statically typed, but the type systems obviously aren't equal. I would argue that stricter types are in fact one of the main things that help me be more productive in languages with ML-ish type systems, compared to C, which lacks e.g. generics and proper sum types.
That said, I find Elixir very interesting for the exact reasons you said.
+ shift registers, amplifiers ("bus drivers"), tri-state thingies, carry generation circuits (for faster ALUs, especially when you connect several ALU chips together for wider additions).
Haiku does not contain any code from BeOS, except the file manager and launcher, which were open sourced by Be before they folded. It is a complete rewrite, done by volunteers.
How much of my private information, as kept by government or private organizations, is stored in Oracle databases that are less secure because of their boneheaded stance on this?
People have all sorts of reasons for choosing Oracle solutions. I am not in a position to influence all those people, even when their choices affect me directly.
Oh yes, I agree. That is why I think it makes sense to tell everyone you can about the issue. However, there is only so much anyone can do about it, and technically Oracle is only at fault for the bugs/vulnerabilities in their products, not for the fact that people won't listen.
A crude example: Imagine you're a janitor. Your company only supplies you with buckets from Leaky Bucket, Inc. Their buckets always leak, creating more messes that you have to clean up. Sure, Leaky Bucket, Inc. needs to fix their bucket processes, but I'd be more angry at the company for continuing to use buckets from a shoddy manufacturer.
Why should you be angry? They are just being stupid. Being stupid is something humans do naturally. Choosing to rely on someone who is verifiably stupid (in your opinion) is significantly worse, I think, than the original stupidity.
edit: And by the way, the very first time you are forbidden from patching a bucket you could patch yourself should be the red flag that tells you to use different buckets. Move on to better things and encourage others to do so too.
I can understand the argument that I shouldn't be upset with my cat because he claws my feet under a blanket; it's his natural instinct.
I can't understand the argument that I shouldn't be upset with a human being because they're stupid, and make stupid decisions. Humans are capable of introspection, education, and change.
I don't think it's unreasonable to expect more of my fellow humans than of my cat.
It is totally fine to be frustrated by humans being stupid, but some humans really do have less cognitive ability than others. In at least some of those cases it isn't necessarily their fault.
So, I'm just making the point that frustration makes sense, but hate probably doesn't. They certainly aren't intending to be stupid, but it is frustrating that we can't show them the error of their reasoning sometimes.
Yes, but I think the claim was that a persistent TCP connection has a negative effect on battery life, which is not obvious to me. An idle connection should have no impact whatsoever.
Because TCP connections are expensive on your battery life. Especially if your network is flaky, and all those connections need to constantly repeat their handshakes.
Not really; a persistent TCP connection is how even GCM does things. Unless you're using keepalives or otherwise preventing sleep, I believe these devices are capable of raising an interrupt when there's activity on a network connection, though I could be mistaken.
And on a data connectivity change, e.g. GSM <-> WiFi switch, there is a good chance that a dozen other components start trying to re-establish their connections, which means the radio is awake anyway (for, usually, at most a few seconds).
> Idle TCP connections do not consume any battery.
An idle connection doesn't consume anything, but a useful idle connection will have some keepalive every X minutes. Multiply that by the number of connections your application will have and by the number of background applications you run on your phone, and the radio will never truly sleep or enter the "low" mode. The solution Google and Apple want is for the OS itself to maintain a single connection to their servers, over which every push is aggregated and sent.
I use my mobile XMPP connections without a TCP keepalive, but send a server ping if no stanzas have been received in the last 30 minutes, and I get a useful and reliable XMPP connection without a noticeable impact on battery. If you use Android's AlarmManager to trigger the check, Android will even take care of scheduling the "alarm" alongside other alarms for efficiency.
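Roughly what that looks like, as a Scala sketch against the Android SDK (the receiver class, the half-hour interval, and the scheduling details are my assumptions, mirroring the policy above rather than any particular app's code):

    import android.app.{AlarmManager, PendingIntent}
    import android.content.{BroadcastReceiver, Context, Intent}
    import android.os.SystemClock

    // Hypothetical receiver: on each alarm, ping only if the connection has
    // been silent for the last half hour.
    class PingCheckReceiver extends BroadcastReceiver {
      override def onReceive(context: Context, intent: Intent): Unit = {
        // if (no stanza received in the last 30 minutes) send an XMPP ping  (app-specific)
      }
    }

    object PingScheduler {
      def schedule(context: Context): Unit = {
        val alarmManager =
          context.getSystemService(Context.ALARM_SERVICE).asInstanceOf[AlarmManager]
        val pending = PendingIntent.getBroadcast(
          context, 0, new Intent(context, classOf[PingCheckReceiver]),
          PendingIntent.FLAG_IMMUTABLE)

        // setInexactRepeating lets Android batch this alarm with others, and
        // ELAPSED_REALTIME (without _WAKEUP) won't wake the device on its own.
        alarmManager.setInexactRepeating(
          AlarmManager.ELAPSED_REALTIME,
          SystemClock.elapsedRealtime() + AlarmManager.INTERVAL_HALF_HOUR,
          AlarmManager.INTERVAL_HALF_HOUR,
          pending)
      }
    }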
> Multiply that by the number of connections your application will have and by the number of background applications you run on your phone, and the radio will never truly sleep or enter the "low" mode.
Why would X x Y idle connections (X: TCP connections per app, Y: applications) prevent your radio from sleeping?
> Why would X x Y idle connections (X: TCP connections per app, Y: applications) prevent your radio from sleeping?
In the worst case, if the pings aren't sent at the same moment and all applications have a timeout set to, let's say, 30 minutes, your phone will send a ping every 30/(X*Y) minutes. I don't know what the usual X and Y would be, but if they're high enough your radio will always be up (check out these slides, 23 in particular: https://www.igvita.com/slides/2013/io-radio-performance.pdf)
And that's only counting idle applications. If you have one that periodically polls a server, it gets even worse.
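To make that 30/(X*Y) figure concrete with made-up numbers:

    // Worst case: pings evenly spread, so the radio is woken every timeout/(X*Y) minutes.
    val timeoutMinutes    = 30.0
    val connectionsPerApp = 2    // X, assumed
    val backgroundApps    = 15   // Y, assumed
    val minutesBetweenWakeups = timeoutMinutes / (connectionsPerApp * backgroundApps)
    // = 1.0: roughly one wakeup per minute, so the radio rarely gets to stay in its low-power state.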
Of course, you shouldn't poll, and if those mechanisms aren't coordinated, then you will suffer. But that's why Android provides the AlarmManager API, and I would expect other mobile platforms to provide something similar.