
There’s something that bothers me about these sorts of recollections that make git seem… inevitable.

There’s this whole creation myth of how Git came to be that kind of paints Linus as some prophet reading from golden tablets written by the CS gods themselves.

Granted, this particular narrative in the blog post does humanise things a bit more, remembering the stumbling steps: how Linus never intended for git itself to be the UI, how there wasn't even a git commit command in the beginning. But it still paints the whole thing in somewhat romantic tones, as if the blob-tree-commit-ref data structure were the perfect representation of data.

One particular aspect that often gets left out of this creation myth, especially by the author of Github is that Mercurial had a prominent role. It was created by Olivia Mackall, another kernel hacker, at the same time as git, for the same purpose as git. Olivia offered Mercurial to Linus, but Linus didn't look upon it with favour, and stuck to his guns. Unlike git, Mercurial had a UI at the very start. Its UI was very similar to Subversion, which at the time was the dominant VCS, so Mercurial always aimed for familiarity without sacrificing user flexibility. In the beginning, both VCSes had mind share, and even today, the mindshare of Mercurial lives on in hg itself as well as in worthy git successors such as jujutsu.

And the git data structure isn’t the only thing that could have ever possibly worked. It falls apart for large files. There are workarounds and things you can patch on top, but there are also completely different data structures that would be appropriate for larger bits of data.

Git isn’t just plain wonderful, and in my view, it’s not inevitable either. I still look forward to a world beyond git, whether jujutsu or whatever else may come.



I'm curious why you think hg had a prominent role in this. I mean, it did pop up at almost exactly the same time for exactly the same reasons (BK, kernel drama) but I don't see evidence of Matt's benchmarks or development affecting the Git design decisions at all.

Here's one of the first threads where Matt (Olivia) introduces the project and benchmarks, but it seems like the list finds it unremarkable enough comparatively to not dig into it much:

https://lore.kernel.org/git/Pine.LNX.4.58.0504251859550.1890...

I agree that the UI is generally better and some decisions were arguably better (changeset evolution, which came much later, is pretty amazing) but I have a hard time agreeing that hg influenced Git in some fundamental way.


[flagged]


"One particular aspect that often gets left out of this creation myth, especially by the author of Github is that Mercurial had a prominent role." implies to me that Hg had a role in the creation of Git, which is why I was reacting to that.

For the deadnaming comment, it wasn't out of disrespect, but when referring to an email chain, it could otherwise be confusing if you're not aware of her transition.

I wasn't sponsoring hg-git, I wrote it. I also wrote the original Subversion bridge for GitHub, which was actually recently deprecated.

https://github.blog/news-insights/product-news/sunsetting-su...


> For the deadnaming comment, it wasn't out of disrespect, but when referring to an email chain, it could otherwise be confusing if you're not aware of her transition.

I assumed it was innocent. But the norm when naming a married woman or another person who changed their name is to call them by their current name and append the clarifying information. Not vice versa. Jane Jones née Smith. Olivia (then Matt).


> Please don't do that. Don't deadname someone.

Is this not a case where it is justified, given that she at that time was named Matt, and it's crucial information to understand the mail thread linked to? I certainly would not understand at all without that context.


The proper way to do that is say, something like "Olivia (Matt)" and then continue. You use the preferred name, and if you need to refer to the deadname to disambiguate, you do it.

If you can avoid the need to disambiguate, you do that too. The name really is dead. You shouldn't use it if at all possible.


Wait a second. You're saying now hg didn't influence git, but how does that fit with your previous comment?

> One particular aspect that often gets left out of this creation myth, especially by the author of Github is that Mercurial had a prominent role

I'm not sure where you're getting your facts from.


Mercurial had a prominent role in the creation myth. It didn't influence git, but it was there at the same time, for the same reason, and at one time, with an equal amount of influence. Bitbucket was once seen as fairly comparable to Github. People would choose git or hg for their projects with equal regularity. The users were familiar with both choices.

Linus never cared about hg, but lots of people that cared about git at one point would also be at least familiar with some notions from hg.


A lot of the ideas around git were known at this time. People mentioned monotone already. Still, Linus got the initial design wrong by computing the hash of the compressed content (which is a performance issue and would also make it difficult to replace the compression algorithm), something I had pointed out early [1]; he later changed it.

I think the reason git then was successful was because it is a small, practical, and very efficient no-nonsense tool written in C. This made it much more appealing to many than the alternatives written in C++ or Python.

[1]: https://marc.info/?l=git&m=111366245411304&w=2
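The ordering issue is easy to see in a few lines of Python. This is a sketch that mirrors git's documented blob format (`"blob <size>\0" + content`), not git's actual code:

```python
import hashlib
import zlib

def blob_id(content: bytes) -> str:
    # git hashes the *uncompressed* object: a small header plus the raw bytes.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

def store_blob(content: bytes):
    # Compression happens only at storage time, so zlib could be swapped
    # for another codec without changing a single object ID.
    header = b"blob %d\x00" % len(content)
    return blob_id(content), zlib.compress(header + content)

oid, packed = store_blob(b"hello\n")
```

Had the hash been computed over the compressed bytes, as in the initial design, changing the compression level or algorithm would have silently changed every object ID in the repository.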


And because of Linux offering free PR for git (especially since it was backed by the main Linux dev).

Human factors matter, as much as programmers like to pretend they don't.


> since it was backed by the main Linux dev

For “backed by” read “initially written by”.

I don't particularly remember Linus making any push for git to be generally popular. While he was more than happy for other projects to use it and be his testing resource, his main concern was making something that matched his requirements for Linux maintenance. BitKeeper was tried and worked well¹, but there were significant licensing issues that caused heated discussion amongst some of the big kernel contributors (which boiled over into flame-wars more than once or twice), and those were getting worse rather than going away².

A key reason for Linus trying what he did with Git, rather than using one of the other open options that started around the same time or slightly before, was that branching and merging source trees as large as Linux could be rather inefficient in the others — this was important for the way Linux development was being managed.

Of course most other projects don't have the same needs as Linux, but git usually wasn't bad for them either, and being used by the kernel's management gave it momentum from two directions: those working on the kernel also working on other projects and using it there too (spreading it out from within), and people further out thinking “well, if they use it, it must be worth trying (or trying first)”. So it “won” some headspace by being the first DVCS people tried³, and because git worked well (or well enough) they just never got around to trying others like mercurial or fossil⁴ that would have worked just as well for them.

----

[1] Most people looking back seem to think/imply that BK was a flash in the pan, but Linus used it for a full couple of years.

[2] A significant problem that caused the separation, rather than it being because BK was technically deficient in some way for the Linux project, was people reverse engineering the protocol to get access to certain metadata that would otherwise have required using the paid version to see, which the BK owners were not at all happy about.

[3] So yes, human factors, but less directly related to one particular human that is Linus, more the project he was famous for.

[4] That sounds a lot more dismissive than I intended. Of course many did try multiple and found they preferred git, as well as those who did the same but went with one of the others because they were a better match for the needs of that person/project.


> I don't particularly remember Linus making any push for git to be generally popular.

Outside of giving one of the highest visibility tech talks in history, at Google (back when Google was the mega hip FAANG), declaring Subversion (the then leading SCM) brain dead?

Marketing works in many different ways, as does signaling. Geeks wear suits, too, their suits just aren't composed of suit jackets and suit pants, they're composed of t-shirts and jeans.


> declaring Subversion (the then leading SCM) brain dead?

I remember that more as berating the incumbent leader in non-distributed VCSs, than promoting a specific DVCS, and that git wasn't mature at that point (the move from BK had not happened). Though maybe my remembered timeline is muddled, do you have further reference to that talk so I can verify details?



I do think an open source, distributed, content addressable VCS was inevitable. Not git itself, but something with similar features/workflows.

Nobody was really happy with the VCS situation in 2005. Most people were still using CVS, or something commercial. SVN did exist, but it had only just reached version 1.0 in 2004, and platforms like SourceForge still only offered CVS hosting. SVN was considered to be a more refined CVS, but it wasn't that much better and still shared all the same fundamental flaws from its centralised nature.

On the other hand, "distributed" was a hot new buzzword in 2005. The recent success of Bittorrent (especially its hot new DHT feature) and other file sharing platforms had pushed the concept mainstream.

Even if it wasn't for the Bitkeeper incident, I do think we would have seen something pop up by 2008 at the latest. It might not have caught on as fast as git did, but you must remember the thing that shot git to popularity was GitHub, not the linux kernel.


Yeah I think people that complain about git should try running a project with CVS or subversion.

The amazing flexibility of git appears to intimidate a lot of people, and many coders don't seem to build up a good mental model of what is going on. I've run a couple of git tutorials for dev teams, and the main feedback I get is "I had no idea git was so straightforward".


> There’s this whole creation myth of how Git came to be that kind of paints Linus as some prophet reading from golden tablets written by the CS gods themselves.

Linus absolutely had a couple of brilliant insights:

1. Content-addressable storage for the source tree.

2. Files do not matter: https://gist.github.com/borekb/3a548596ffd27ad6d948854751756...

At that time, I was using SVN and experimenting with Hg and Bazaar. Both were too "magical" for me, with unclear rules for merging, branching, rebasing.

Then came git. I read its description "source code trees, identified by their hashes, with file content movement deduced from diffs", and it immediately clicked. It's such an easy mental model, and you can immediately understand what operations mean.
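That mental model fits in a few lines. Here's a toy content-addressable store in the same spirit (the serialization below is invented for illustration and is much simpler than git's real object format):

```python
import hashlib

store = {}

def put(obj: bytes) -> str:
    # Content-addressable: the key *is* the hash of the value.
    oid = hashlib.sha1(obj).hexdigest()
    store[oid] = obj
    return oid

def put_tree(entries):
    # A "tree" is just a sorted list of (name, child-hash) lines.
    body = "\n".join(f"{name} {oid}" for name, oid in sorted(entries.items()))
    return put(body.encode())

def put_commit(tree, parent, message):
    # A "commit" points at one tree snapshot and (optionally) a parent commit.
    return put(f"tree {tree}\nparent {parent}\n\n{message}".encode())

# Two snapshots of a one-file project:
t1 = put_tree({"README": put(b"v1\n")})
c1 = put_commit(t1, None, "first")
t2 = put_tree({"README": put(b"v2\n")})
c2 = put_commit(t2, c1, "second")
```

Identical content always maps to the same ID (free deduplication), and changing one blob ripples up through the tree hash into the commit hash, which is why comparing two snapshots is cheap.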


> 2. Files do not matter

I wish weekly for explicit renames.

> At that time, I was using SVN and experimenting with Hg and Bazaar. Both were too "magical" for me, with unclear rules for merging, branching, rebasing.

I have no idea what you mean.

> It's such an easy mental model, and you can immediately understand what operations mean.

Clearly, many people disagree.


> I wish weekly for explicit renames.

You can do that in git. `git mv` stores a hint in the commit that a file has been moved.

You just don't _have_ to do it.


> `git mv` stores a hint in the commit that a file has been moved.

No. It appears in git status but is not committed. And it disappears from git status if the file is modified enough.


Another alternative is the patch-theory approach from Darcs and now Pijul. It's a fundamentally different way of thinking about version control—I haven't actually used it myself but, from reading about it, I find thinking in patches matches my natural intuition better than git's model. Darcs had some engineering limitations that could lead to really bad performance in certain cases, but I understand Pijul fixes that.


I was a bit confused about the key point of patch-based versus snapshot-based, but I got some clarity in this thread: https://news.ycombinator.com/item?id=39453146


The Patch Theory of darcs and CRDTs (and the middle idea of OTs [Operational Transforms]) are all interestingly related in their early research and early cross-communication. Today it is probably easier to ask "Do you ever think you might want a CRDT for source control?" to explain why you might want patch-based over snapshot-based, because CRDTs exist in part because the research started as "what if you could do something like darcs but with general data, not just source control?" It's fascinating which technologies win and in which ways/places/niches.


The article is written by a co-founder of github and not Linus Torvalds.

git is just a tool to do stuff. Its name (chosen by that Finnish bloke) is remarkably apt: it's for gits!

It's not Mercurial, nor github, nor is it anything else. It's git.

It wasn't invented for you or you or even you. It was a hack to do a job: sort out control of the Linux kernel source when BitKeeper went off the rails as far as the Linux kernel devs were concerned.

It seems to have worked out rather well.


> there are also completely different data structures that would be appropriate for larger bits of data.

Can you talk a little bit about this? My assumption was that the only way to deal with large files properly was to go back to centralised VCS, I'd be interested to hear what different data structures could obviate the issue.


One way to deal with large binary files is git-annex, it is as decentralized as git. But I dare say it lost to git-lfs, because Github and co weren't interested in hosting it.


In the early 2000s I was researching VCSs for work and also helping a little with developing arch and bazaar, then (less so) bzr. I trialed BitKeeper for work. We went with Subversion eventually. I think I tried Monotone but it was glacially slow. I looked at Mercurial. It didn't click.

When I first used Git I thought YES! This is it. This is the one. The model was so compelling, the speed phenomenal.

I never again used anything else unless forced -- typically Subversion, mostly for inertia reasons.


> There’s this whole creation myth of how Git came to be that kind of paints Linus as some prophet reading from golden tablets written by the CS gods themselves.

What?

> Git isn’t just plain wonderful, and in my view, it’s not inevitable either.

I mean, the proof is in the pudding. So why did we end up with Git? Was it just dumb luck? Maybe. But I was there at the start for both Git and Mercurial (as I comment elsewhere in this post). I used them both equally at first, and as a Python aficionado should've gravitated to Mercurial.

But I like to understand how tools work, and I personally found Mercurial harder to understand, slower to use, and much less flexible. It was great for certain workflows, but if those workflows didn't match what you wanted to do, it was rigid (I can't really expound on this; it's been more than a decade). Surprisingly (as I was coding almost entirely in Python at the time), I also found it harder to contribute to than Git.

Now, I'm just one random guy, but here we are, with the not plain wonderful stupid (but extremely fast) directory content manager.


> But I like to understand how tools work, and I personally found Mercurial harder to understand, slower to use, and much less flexible.

It's a relief to hear someone else say something like this, it's so rare to find anything but praise for mercurial in threads like these.

It was similar for me: In the early/mid 2010s I tried both git and mercurial after having only subversion experience, and found something with how mercurial handled branches extremely confusing (don't remember what, it's been so long). On the other hand, I found git very intuitive and have never had issues with it.


Good point. Git succeeded in the same way that Unix/Linux succeeded. Yes, it sucks in many ways, but it is flexible and powerful enough to be worth it. Meanwhile, something that is "better" but not flexible, powerful, or hackable is not evolutionarily successful.

In fact, now that I've used the term "evolution", Terran life/DNA functions much the same way. Adaptability trumps perfection every time.


For me, the real problem at the time is that "rebase" was a second class feature.

I think too many folks at the time thought that full immutability was what folks wanted and got hung up on that. Turns out that almost everyone wanted to hide their mistakes, badly structured commits, and typos out of the box.

It didn't help that mercurial was slower as well.


Don’t forget Fossil, which started around the same time…

https://fossil-scm.org/home/doc/trunk/www/history.md


>And the git data structure... falls apart for large files.

I'm good with this. In my over 25 years of professional experience, having used cvs, svn, perforce, and git, it's almost always a mistake keeping non-source files in the VCS. Digital assets and giant data files are nearly always better off being served from artifact repositories or CDN systems (including in-house flavors of these). I've worked at EA Sports and Rockstar Games and the number of times dev teams went backwards in versions with digital assets can be counted on the fingers of a single hand.


Are CAD data not sources in and of themselves?

My last CAD file was 40GiB, and that wasn't a large one.

The idea that all sources are text means that art is never a source, and that many engineering disciplines are excluded.

There's a reason Perforce dominates in games and automotive, and it's not because people love Perforce.


I think this conflates "non-source" with "large". Yes, it's often the case that source files are smaller than generated output files (especially for graphics artifacts), but this is really just a lucky coincidence that prevents the awkwardness of dealing with large files in version control from becoming as much of a hassle as it might be. Having a VCS that dealt with large files comfortably would free our minds and open up new vistas.

I think the key issue is actually how to sensibly diff and merge these other formats. Levenshtein-distance-based diffing is good enough for many text-based formats (like typical program code), but there is scope for so much better. Perhaps progress will come from designing file formats (including binary formats) specifically with "diffability" in mind -- similar to the way that, say, Java was designed with IDE support in mind.


Non-source files should indeed never be in the VCS, but source files can still be binary, or large, or both. It depends on how you are editing the source and building the source into non-source files.


Also, some source files that could otherwise be treated as text⁰ end up effectively being binary blobs because tools don't write them in a stable order, which makes tracking small changes difficult because you can't see that they actually are small changes. A number of XML formats¹, and sometimes JSON & others, have this issue too.

----

[0] for the purposes of change tracking and merging

[1] Stares aggressively at SSIS for its nasty package file format² and habit of saving parts of it in different orders apparently randomly so updating the text of an annotation can completely rearrange the saved file

[2] far from the only crime committed by SSIS I know, but one occasionally irritating enough to mention


Could you use git pre-commit hooks or something similar to transform the files by deterministically sorting the items at each level?

Diffoscope does something similar, diff sorted stuff first, then if there are no changes, then report that, and show the unsorted diffs.

https://diffoscope.org/ https://try.diffoscope.org/
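A sketch of the sorting idea using only the stdlib (the element names like `pkg` and `task` are invented for illustration; a real normalizer would have to know which elements are actually safe to reorder):

```python
import xml.etree.ElementTree as ET

def sort_xml(text: str) -> str:
    """Recursively order sibling elements so that semantically identical
    files serialize identically (a pre-commit normalization sketch)."""
    root = ET.fromstring(text)

    def sort_children(elem):
        for child in elem:
            sort_children(child)
        # Key on tag name, then sorted attributes, so the order is stable
        # no matter how the authoring tool shuffled things on save.
        elem[:] = sorted(elem, key=lambda c: (c.tag, sorted(c.attrib.items())))

    sort_children(root)
    return ET.tostring(root, encoding="unicode")

# Two saves of the "same" document, shuffled by the tool:
a = sort_xml('<pkg><task id="2"/><task id="1"/><note x="y"/></pkg>')
b = sort_xml('<pkg><note x="y"/><task id="1"/><task id="2"/></pkg>')
```

After normalization the two shuffled saves diff as identical, so only real edits show up in version control.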


> Could you use git pre-commit hooks

Possibly, though I might be concerned that the format has ordering oddities that it is unexpectedly sensitive to. Unlikely, but given how many other oddities DTS/SSIS has collected over the years I'd not be surprised!

Also, we weren't using Git in DayJob at the time we were actively developing with SSIS (maybe VSTS had an equivalent we could have used?), and we are now acting to remove the last vestiges of it from our workflows rather than spending time making it work better with them!


OMG! Please don't remind me about trying to source control SSIS. One tiny change cascades into 1000 lines of source being different. Total nightmare.


I am rooting for pijul.


I just wish they'd extend git to have better binary file diffs and moved file tracking.

Remembering the real history matters, because preserving history is valuable by itself, but I'm also really glad that VCS is for most people completely solved, there's nothing besides Git you have to pay attention to, you learn it once and use it your whole career.


> I just wish they'd extend git to have better binary file diffs

It's not built-in to git itself, but I remember seeing demos where git could be configured to use an external tool to do a visual diff any time git tried to show a diff of image files.

> and moved file tracking.

Check out -C and -M in the help for git log and blame. Git's move tracking is a bit weirder than others (it reconstructs moves/copies from history rather than recording them at commit), but I've found it more powerful than others because you don't need to remember a special "move" or "copy" command, plus it can track combining two files in a way others can't.
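Under the hood, `-M` works roughly by pairing each deleted path with the added path whose content is most similar, above a threshold (git's documented default is 50%). A toy version of that pass, with `difflib` standing in for git's own similarity metric:

```python
import difflib

def similarity(a: str, b: str) -> float:
    # Stand-in for git's similarity score (0.0 = unrelated, 1.0 = identical).
    return difflib.SequenceMatcher(None, a, b).ratio()

def detect_renames(deleted, added, threshold=0.5):
    """Pair each deleted path with the most similar added path, like -M."""
    renames = []
    for old_path, old_text in deleted.items():
        scored = [(similarity(old_text, new_text), new_path)
                  for new_path, new_text in added.items()]
        if scored:
            best_score, best_path = max(scored)
            if best_score >= threshold:
                renames.append((old_path, best_path))
    return renames
```

Because the pass is heuristic, a file that is both moved and heavily edited in the same commit can fall below the threshold and show up as a delete plus an add, which is the usual complaint about this approach.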


It demonstrably can't detect all moves, Torvalds' theory that you didn't need moves tracking was bulls*it.


> but I'm also really glad that VCS is for most people completely solved, there's nothing besides Git you have to pay attention to, you learn it once and use it your whole career.

From what I hear most current new developers never really learn git, they learn a couple features of some git GUI.

And it's understandable, you're really understating what learning git (one of the messiest and worst documented pieces of software ever) well entails.

I find it a disgrace that we're stuck at git, actually.


That definitely describes me. I can use the Git CLI if needed, because the Git Book exists, but I almost never need to.

If I had to actually use git on the CLI every day, I would probably complain a lot, but it's a pretty good experience when you're using Git Cola and GitHub.

It would be nice if it had native discussions, issues, and wikis, like Fossil does, having that all, decentralized seems like a good idea though.


I was always under the impression Monotone - which was released two years before Mercurial - was the inspiration for git, and that this was pretty well known.


This is all fairly speculative, but I didn't get the impression that Monotone was a main inspiration for Git. I think BitKeeper was, in that it was a tool that Linus actually liked using. Monotone had the content-addressable system, which was clearly an inspiration, but that's the only thing I've seen Linus reference from Monotone. The way I would interpret the history between these projects: he tried using Monotone, bailed because it was slow, but took the one idea he found interesting and built a very different thing with that concept as one part of it.


Linus was definitely aware of and mentioned Monotone. But to call it an inspiration might be too far. Content Addressable Stores were around a long time before that, mostly for backup purposes afaik. See Plan9's Venti file system.


Yes, Monotone partly inspired both. You can see that both hash contents. But both git and hg were intended to replace Bitkeeper. Mercurial is even named after Larry McVoy, who changed his mind. He was, you know, mercurial in his moods.


The file-tree-snapshot-ref structure is pretty good, but it lacks chunking at the file and tree layers, which makes it inefficient with large files and trees that don't change a lot. Modern backup tools like restic/borg/etc use something similar, but with chunking included.
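The chunking those tools use is content-defined: boundaries are picked from a rolling hash of the data itself, so an insertion early in a file only disturbs nearby chunks instead of shifting every boundary after it. A simplified sketch of the idea (real tools use proper Rabin/Buzhash fingerprints and carefully tuned parameters):

```python
def chunk(data: bytes, min_size=64, max_size=4096, boundary_mask=0x3FF):
    """Split data where a rolling hash of recent bytes hits a bit pattern."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF  # crude rolling hash, for illustration
        size = i - start + 1
        at_boundary = (h & boundary_mask) == boundary_mask
        # Cut at a content-defined boundary, or force a cut at max_size.
        if (size >= min_size and at_boundary) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

A store keyed by the hash of each chunk then deduplicates repeated regions across versions of a large file, which is exactly the property plain whole-file git blobs lack.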


If you haven't seen Larry McVoy's issues with git, I think the list still has some weight.

https://news.ycombinator.com/item?id=40870840


Do you happen to know what Linus didn't like about Mercurial?


I wonder this too. My guess is that he did not like "heavyweight" branching and the lack of cherry-pick/rebase. At any rate that is why I didn't like it back then.

Sun Microsystems (RIP) back then went with Mercurial instead of Git mainly because Mercurial had better support for file renames than Git did, but at Sun we used a rebase workflow with Mercurial even though Mercurial didn't have a rebase command. Sun had been using a rebase workflow since 1992. Rebase with Mercurial was wonky, but we were used to wonky workflows with Teamware anyways. Going with Mercurial was a mistake. Idk what Oracle does now internally, but I bet they use Git. Illumos uses Git.


I watched that whole process with fascination. It was long, careful, thorough ... and chose wrong.

A part of me thinks that there was a Sun users aversion to anything Linux related.


> A part of me thinks that there was a Sun users aversion to anything Linux related.

It wasn't that. It really was just about file renaming.


Git cut that particular Gordian knot by ignoring it. Many VCSs agonised over file identity. They overestimated its importance and underestimated its difficulty. It's basically impossible in the general case.


Ironically hg now has better rebasing than git e.g. the evolve extension


Ah, right, at Sun we used MQ. But anyways, just glancing at the hg evolve docs I'm unconvinced. And anyways, it's an "extension". Mercurial fought rebase for a long time, and they did dumb things like "rebase is non-interactive, and histedit is for when you want to edit the history".

And that is partly why Mercurial lost. They insisted on being opinionated about workflows being merge-based. Git is not opinionated. Git lets you use merge workflows if you like that, and rebase workflows if you like that, and none of this judgment about devs editing local history -- how dare the VCS tell me what to do with my local history?!


A lot of things in Mercurial kind of geared you towards using it more like Subversion was used. You pretty much could use Mercurial just like git was, and is, used, but the defaults didn't guide you to that direction.

One bigger difference I can think of is, Mercurial has permanent, named branches (the branch name is written into the commit), whereas in git branches are just named pointers. Mercurial got bookmarks in 2008 as an extension, and added to the core in 2011. If you used unnamed branches and bookmarks, you could use Mercurial exactly like git. But git was published in 2005.

Another is git's staging area. You can get pretty much the same functionality with repeatedly using `hg commit --amend` but again, in git the default gears you towards using the staging approach; in Mercurial you have to specifically search for a way to get it to function this way.


I may be misremembering but c vs python was a part of it. I don't think Linus thought too highly of python, or any interpreted languages, except shell perhaps, and didn't want to deal with installing and managing python packages.


> and didn't want to deal with installing and managing python packages.

Based on the fact that the ecosystem torpedoed an entire major version of the language, and that there are a bazillion competing and incompatible dep managers, it seems that bet turned out well


I like Python and hate to admit it but you’re right.


Linus worried Mercurial was similar enough to BitKeeper that BitMover might threaten people who worked on it. Probably he had other complaints too.


> blob-tree-commit-ref data structure were the perfect representation of data

Is it not? What are the alternatives?


[flagged]


> Git is for hobby side projects. Perforce and Sapling are for adult projects.

The numbers I've seen say git has about 87-93% of the version control market share. That's just one of many reasons I think it is safe to say most professional developers disagree with you. I can understand someone prefering Perforce for their workflow (and yes, I have used it before). But saying Git is only "for hobby side projects" is just ridiculous. It has obviously proven its value for professional development work, even if it doesn't fit your personal taste.


A local minimum is a point in the design space from which any change is an improvement (but there's other designs which would be worse, if they make several larger changes). I think it's hard to make that claim about Git. You're probably referring to a local maximum, a point in the design space from which any change makes it better (but there's other designs which would be better, if they make several larger changes).

In my career, I've used SVN, Git, and something I think it was called VSS. Git has definitely caused fewer problems; it's also been easy to teach to newbies. And I think the best feature of Git is that people really benefit from being taught the Git models and data structures (even bootcamp juniors on their first job), because suddenly they go from a magic-incantation perspective to a problem-solving perspective. I've never experienced any other software which has such a powerful mental model.

That of course doesn't mean that Mercurial is not better; I've never used it. It might be that Mercurial would have all the advantages of git and then some. But if that were so, I think it would be hard to say that Git is at a local maximum.


> something I think it was called VSS

Hmm, maybe Microsoft Visual Source Safe? I remember that. It was notorious for multiple reasons:

* Defaulted to requiring users to exclusively 'check out' files before modifying them. Meaning that if one person had checked out a file, no one else could edit that file until it was checked in again.

* Had a nasty habit of occasionally corrupting the database.

* Was rumored to be rarely or not at all used within Microsoft.

* Was so slow as to be nearly unusable if you weren't on the same LAN as the server. Not that a lot of people were working remotely back then (i.e. using a dial-up connection), but for those who were it was really quite bad.


> it's also been easy to teach to newbies

The number of guides proclaiming the ease of Git is evidence that Git is not easy. Things that are actually easy don't involve countless arguments about how easy they are.

I can teach an artist or designer who has never heard of version control how to use Perforce in 10 minutes. They’ll run into corner cases, but they’ll probably never lose work or get “into a bad state”.


> A local minimum is [...]

Unless you're in ML, in which case it's a minimum of the loss function, not the utility function...


> You're probably referring to a local maximum, a point in the design space from which any change makes it better (but there's other designs which would be better, if they make several larger changes).

I think you meant "worse" for that first "better."


Git being easy to teach to newbies is an uncommon opinion. It was not clear if you meant easier than Subversion. But this would be even more uncommon.


> I've never experienced any other software which has such a powerful mental model.

I hate to be that guy, but you should spend some time with jj. I thought the same, but jj takes this model, refines it, and gives you more power with fewer primitives. If you feel this way about git, but give it an honest try, I feel like you'd appreciate it.

Or maybe not. Different people are different :)


> In my 18 years of professional dev I’ve never actually used Git professionally. Git is for hobby side projects. Perforce and Sapling are for adult projects.

I have encountered Perforce, Mercurial, and git professionally throughout my career. Considering the prominence of git in the market, it must be obvious that git does some combination of things right. I myself have found git to be solid where the other salient choices have had drawbacks.

The use of git is so widespread that it is hardly a local minimum.


*local maximum


Mercurial had its chance, and it blew it by insisting that the history is immutable and branches are heavy-weight objects (SVN-style). Turns out that's not what people want, so git won.

(And the fact that Mercurial supports history editing _now_ is irrelevant, that ship has long sailed.)


Everything is a local minimum, given that you can't exhaustively prove otherwise for anything beyond the most trivial domains.


> And it’s doubly tragic because Mercurial is much better than Git. It’s VHS vs Betamax all over again.

VHS won because it was cheaper and could record longer. Fidelity was similar at the recording speeds people used in practice.


In my 25ish years of professional dev, I've used git for about 15 of them. I've never used Perforce and never even heard of Sapling.

It’s very likely that most if not all of the software stack you’re using to post your comment is managed with git.


I am keenly aware of how common Git is.

A whole generation of programmers have only ever known Git and GitHub. They assume that since it is the standard it must be good. This is a fallacy.

Bad things can become popular and become entrenched even when better things exist. Replacing Git today would require something not just a little better but radically better. Git was always worse than Mercurial. It won because of GitHub. If MercurialHub had been invented instead we’d all be using that and would be much happier. Alas.


> Git was always worse than Mercurial.

hard disagree. Git was always way better than Mercurial.

> It won because of GitHub.

I and most of the developers I have worked with over the years all used git for many years before ever even trying GitHub. GitHub obviously has helped adoption, but I'm not convinced that git would not have won even if GitHub had never existed.


> Git was always way better than Mercurial.

How is Git better than Mercurial in any way, never mind "always way better"? Serious question.

I'd possibly accept "the original Mercurial implementation was Python which was slow". And perhaps that's why Git won rather than GitHub. But I don't think so.


So none of the stuff you’re using qualifies as “adult projects”?

I don’t object to saying we can do better than git. But saying git “is for hobby side projects” is ridiculous. It’s fine for serious projects.


Depends on how snarky I’m feeling. I’m not sure that webdev counts as wearing big boy pants if I’m feeling frisky! It’d be interesting to analyze how much of the stack used Git as its source of truth versus being mirrored by a BigTech adult VCS!

Git sucks for serious projects. It’s certainly what many people use. But I don’t think “we can do better” is a strong enough statement. Git is bad and sucks. It’s functional but bad. We can do much much better than Git.

I like to rant about Git because we will never do better unless people demand it. If people think it’s good enough then we’ll never get something better. That makes me sad.


I work at a big tech company. Everything is git. I’m not just referring to the web bits, I mean the whole OS down to the kernel.


I’m sorry for your loss :(


> Git sucks for serious projects.

again, hard disagree. I work on serious projects all day long. Git is fabulous for serious projects.


I hope that someday you get to experience a VCS tool that doesn't suck.


What are we missing?


> Git was always worse than Mercurial. It won because of GitHub. If MercurialHub had been invented instead we’d all be using that and would be much happier.

MercurialHub was invented. It’s called Bitbucket, it was founded around the same time as GitHub, and it started out with Mercurial. People wanted Git, so Bitbucket was forced to switch to Git.


> People wanted Git

No. If Bitbucket competed with MercurialHub then MercurialHub would still have won and we’d all be much happier today.


Bitbucket was your desired MercurialHub. They supported Mercurial first. Then they added Git support because people wanted Git. At that point, Bitbucket users could choose between Git and Mercurial. They chose Git, so much so that Bitbucket stopped supporting Mercurial.

Look, I get that you hate Git, but people had the choice of using Mercurial or Git – on the same platform even – and they overwhelmingly chose Git. You claiming that Mercurial would win in a fair fight ignores the fact that that fight did actually happen and Mercurial lost.


You’re acting like the only difference between GitHub and Bitbucket was the choice of VCS tool. That’s obviously not even remotely true. GitHub was a superior platform that was more pleasant to use than Bitbucket. The choice of Git had nothing to do with that.

GitHub won. Not Git. IMHO.


The only difference between using Bitbucket with Mercurial and using Bitbucket with Git was the choice of VCS tool. And people chose Git.

> If MercurialHub had been invented instead we’d all be using that

This existed. We aren’t.


I used both. I disrespectfully disagree.



