> Anyways, I plan to eventually open source it. There's a lot of stuff in the code I want to clean up (things like setting up the build environment, some of the uglier hacks, etc.) before I feel comfortable releasing it to the public
I hate seeing this. We all write some ugly hacks. Just release it and work on it in public, it's fine.
I get what you're saying but I only like to release to the public something that actually works.
Sometimes when I have just hacked something together it is full of hardcoded paths for my local system as I never planned for it to be anything more.
I made the mistake once of putting up some code that was full of machine-specific and OS-install-specific stuff because several people asked for it. I ended up getting a tonne of emails moaning at me for "releasing" something that didn't work for them. They didn't want to figure out every little hardcoded hack and demanded I fix it. Often very rudely.
That's the problem with "working on it in public" when it doesn't actually work anywhere other than your dev machine. People don't see it as a 'work in progress' and so will bitch and moan when it doesn't build for them.
And trust me, developer users can be even worse than normal users: not only do they bitch about something not working, they will often resort to name-calling and such about the quality of the code.
Sure, many understand it was a weekend hack project, but there are always a few who will really pull you down, and not everyone can handle such personal attacks when all they are trying to do is share some code for something they think is cool.
> Sometimes when I have just hacked something together it is full of hardcoded paths for my local system as I never planned for it to be anything more.
Apart from everything else you said, publishing local paths could also pose a security risk.
Sure, the compilation works on their system and produces a binary, but that doesn't mean they can just upload the source tree and have it work for anyone else out of the box.
When I talk about releasing something that "actually works" (as I put it) I mean a simple git clone, make, make install (or equivalent).
Not having to include a readme explaining that you need to search-and-replace files to remove hardcoded paths, or install some old library because my machine wasn't up to date and it won't compile with the latest version, because I was lazy and hardcoded specific version numbers in places when something else didn't work and I didn't want to spend half an hour hunting down some old Windows 2000 DLL, etc.
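To be concrete, the bar I mean is roughly this, as a sketch (the URL is hypothetical, for illustration):

    git clone https://example.com/project.git
    cd project
    make
    sudo make install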
I am guessing based on my own experience here but developing plugins for Visual C++ 6 on Windows 2000 is a bit of a pain in the ass in 2021. It was a pain in the ass back in the early 2000s when I had access to everything on MSDN.
Sure, you could make the point that this tool is designed for people using VC++6, so they must have a copy of that, but they may not have the other ancient libraries this tool is built with, which is often the biggest obstacle when working with old proprietary software like this.
I support the author's decision to keep things to themselves until they've been able to tidy it up, then release something that is easier to build, so they're not overwhelmed with "the build isn't working as it says I need x.y.z" issues.
I have plenty of utilities that I've written over the years that work perfectly on my machine, but I know they'll probably fail on someone else's due to hardcoded file paths and assumptions about what libraries are installed.
The thing is, there's a big difference between writing something for your use case, and writing something for others.
Nope. Because as I said, they wouldn't work, due to the hardcoded file paths etc.
Expanding on this further - I have a few apps I've written out there in the wild, and for one (which also found its way onto a magazine cover CD-ROM back in the day) I had immense pressure to open source it, so against my better judgement I did. I then had to endure lots of negative feedback about the quality of my code. The whole experience put me off going open-source ever again[1].
I am a hobbyist coder, and my code quality will never be 'up there' with the professionals, but the moment you release something (even for free) people expect it to be perfect, which is unrealistic.
---
[1] Not quite true, I do have a GitHub with a few tiny repos, but nothing important.
I guess people make projects like this to show off on their CV, and they’re worried that bad code could have the opposite effect. That said, in hiring, I don’t mind seeing bad code - it gives us a talking point about the constraints involved, and how you’d improve it.
I'd much rather hire someone who shows how to manage a mess, than someone who tries to avoid that mess.
Mess in code is inevitable. Taking on technical debt is fine, and knowingly leaving certain parts messy is fine too. Provided you do so knowingly, and keep it manageable.
So, indeed: I would much rather see someone who has horrible code but can explain why it wasn't a problem there, or how they refactored out of it into a beauty, than someone whose single "initial commit" is that beauty immediately.
Edit: I've worked in places where there was mess-paralysis. An ingrained fear of breaking stuff (but not the prowess to solve that with better CI, tests, etc.). Where we could not react to market or customer demands, where all progress was halted by the fear of having to go into "The Mess" and break stuff, or introduce New Mess.
I tend to overthink and hyperreflect when it comes to organisation and structure. Most of the time this doesn’t really help. One of my goals in the last year or so was to reduce that.
A very simple but effective thing I started doing is writing "cleanup" comments for duplication, ad-hoc coupling and such.
In the comments I sometimes describe how one could go about fixing it, so I remember later what the issue was.
There are several benefits to this. For one, prioritizing what matters in the moment. Letting things unfold for real, so the right structure becomes more obvious later. Also it sometimes sparks little discussions that are useful and reveal a pragmatic way forward.
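For what it's worth, here is a sketch of what one of these comments can look like (shell here; the file and variable names are made up):

    # CLEANUP: duplicates the config parsing in deploy.sh; once the
    # config format settles, extract a shared helper instead.
    version=$(grep '^version=' app.conf | cut -d= -f2)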
3. You write another duplicate of #1, so you should refactor. But the PM doesn't understand why you're changing the code you made in #1 and #2 as part of ticket #3.
So the jaded devs are certain the duplication will occur, but believe their choice is either to pre-emptively fix the duplication at #1, or never to fix it at all.
> But the PM doesn't understand why you're changing the code you made
If that is the case, the PM does not understand red-green-refactor, always-refactoring, or even just plain refactoring. I'd say the PM doesn't even understand the idea of technical debt, in that case.
The problem is certainly the PM's disconnect from how to keep a codebase (and their team) sane using the most common programming practices. Not the refactoring itself.
I agree with that assessment on the surface. But it's a sign of a generally unhealthy team, not just a poor PM (although the poor PM might be the root cause).
I have a ton of public code that I link to in my resume and not since my first job out of school has any interviewer looked at it and asked me questions about it.
Even the people who want you to do a coding project would not take my existing code over rewriting from scratch for their purposes. (And yeah on the one hand that's fair, but also I did not go through with those interviews.)
> We all write some ugly hacks. Just release it and work on it in public, it's fine.
I think we here underestimate the number of people who assume that code found on GH is production ready, or at least that the author thinks it is. You'll basically have to pre-populate an entire kanban of "yes, I'm aware this is broken, don't file another bug about it", and making that digestible for the public may well take as long as just cleaning everything up.
There really is a level of entitlement you find, often in the form of "I based a project on this thing, now this bug is blocking me, so you are blocking me until you fix this bug you made". And yeah, you can just say "sucks to be you" but is it worth the hassle?
Why do you care about people emailing you at all? It doesn't matter what you publish or in which state you do it; you will always get a shitton of emails ranging from "but, will this work on my Xbox?" (for a desktop program) to "I see you are an expert in $RANDOM_TOPIC, would you kindly help with my homework coding assignment?"
If people emailed me saying "Hey, I couldn't build this on my system because of Y" or the like, I would actually be very happy.
Signal-to-noise ratio (I'm talking about bug tracking specifically). If you want any kind of feedback, you have to take all of it, and if you publish in a "rough" state then the noise ratio will be high.
My point is that the feedback will be mostly crap anyway, and releasing it in a state perceived as unusable (which is far from the truth) would not change that; it might even act as a filter and increase the quality of the feedback.
I second this. If your goal is to release it publicly, chances are most of your dirty laundry and uglier hacks are either pain points that are systemically hard (installation, different OS versions/platforms, dependencies, build environment), which everyone suffers with and the community can help with, or they're in a part of the code that might not be a core competency of yours, in which case the community can also help.
> chances are most of your dirty laundry and uglier hacks are either pain points that are systemically hard (installation, different OS versions/platforms, dependencies, build environment), which everyone suffers with
To add to this point, I would love to see the commits that do resolve these issues. I could learn from them too.
Maybe that "dirty laundry", in this case, is security-through-obscurity (maybe even with a TODO: this is insecure, fix it) or hardcoded values such as keys or ids that might compromise stability or security.
E.g. I normally open source everything from day one, but keep my Ansible (and before Chef) stuff behind closed doors: it's full of commits that would compromise security. E.g. "quick hardcoded list of private IP-addresses that can access the reporting Database until we have the VPC coupled to a VPN."
There is this immense pressure that if your code isn't perfect, open sourcing it will have negative effects on your career. So, understandably, people choose not to.
On the contrary, I like it: it makes me more confident about the general quality of the work, even though I understand that it can increase cycle time.
If I understand correctly: when you publish it, the first merge requests you receive will be improvements to your build system. People will compete to provide the best fix for this obvious defect. As a result, the main obstacle to publication will disappear quickly.
> People will compete to provide the best fix for this obvious defect.
This may be part of the concern. Sometimes for a personal project you don't want that sort of input on the essentials. It is your thing, you want to do it right yourself either through pride or because you feel you'll learn more/better that way.
In those cases, while being perfectly willing to share the results by releasing the source at a particular point they might not want to be "distracted" at this stage by the sort of input you describe.
Not releasing immediately also delays the potentially discombobulating license choice issue, if this is your first such project and you have not given that much thought previously.
There is also plain ol' technical debt: code that is overengineered, unstructured or hacky because, when you wrote it, you were still exploring the problem space.
Now that you have a better idea of what problem to solve and how to solve it, you can restructure the code to be more concise and overall more efficient in reaching your goal.
I think it's advisable to clean up your code like that before publishing, just like it's advisable to clean up your git commits before pushing a merge request.
Actually just don't release, period. Point to a repository and be done with it.
Anyone who expects support setting up Visual Studio 6 or the like to use this either knows what they are doing (so they won't ask inane questions, but rather useful ones) or doesn't know what they are doing (in which case answering their questions is probably not a good use of your time anyway).
The fact that they're trying to set up a website or something hints that either they're in it for the fame (so they have a "surprise the people" mindset that makes them want to do releases and hide the source for as long as possible), that they already know someone who may want to spend big $ on this, or that it's just their first project and they are still a bit ashamed/naïve.
I have seen both happen many times, and it always ends with no source ever released, or the source released way too late to make any impact.
> I don't think anyone wants to go back to the bear skins and stone knives of centralized version control they can't effectively use from a modern system.
When working on a small team with everyone in the same office, there was nothing wrong with cvs, rcs, SourceSafe, Subversion, etc.
From my perspective, SVN merges were an absolute ape show even with a team as small as 5 people.
Internet down? You cannot do anything, because you cannot even commit stuff with Subversion. Let alone with the totally obscure version control software I was working with as well.
Yes it was better than making copies of source code with date/time.
The killer feature of Git for me: I can work in my local repo and do whatever I want there. Only when I have to share the code do I have to clean up commits/code.
In general I probably could have done the local-repo thing with SVN, but moving changes between repositories would have been more hassle than it was worth. Also, as a young dev I did not think about that until I saw the workflows in Git, and it blew my mind.
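A minimal sketch of that workflow, in case it helps (the branch name is made up):

    git switch -c wip-experiment          # hack freely on a private branch
    git commit -am "wip: try approach A"
    git commit -am "wip: fix typo"
    git rebase -i main                    # squash/reword until it reads well
    git push origin wip-experiment        # only now does anyone else see it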
> Internet down? You cannot do anything, because you cannot even commit stuff with Subversion.
That, in itself, I consider a poor argument. The obvious solution would be to ensure Internet does not go down.
However, as an important design precondition, it forces the builder of such a system to embrace asynchronous operation, eventual consistency, etc. That leads to a much better design, even if your internet is 99.9999% available.
I'm convinced the "offline" requirements are the reason merging, rebasing, etc are so well done in Git.
No, my argument is that "ensuring connectivity" is a much simpler, cheaper and easier solution than "build all tools so they can handle offline".
Or, fine: git can be down. Now, how do you read the framework/API/lib documentation? Need to ask a colleague where that key for the CI was again? "Build offline-first messaging." Need that backtrace from the last CI run? "Build some auto-asset-syncing from the CI to local." And so on.
If you think that is a valid line of solving things, then something is wrong with the way you are solving problems.
> No, my argument is that "ensuring connectivity" is a much simpler, cheaper and easier solution than "build all tools so they can handle offline".
The only problem with this is that ensuring connectivity is impossible. You can have multiple redundant backups with different technologies, and there's always a non-zero possibility that all of them will fail at once. This is compounded by factors like connectivity on the other end - how can you ensure connectivity when someone is working from a hotel for a conference, or from home, or from their yacht? Having a centralized repo means you also need people to work from their office if you're not going to control their connectivity too.
When it comes to source control, something that lets developers carry on working when they don't have access to a central repo is massively better than everything else.
> Now, how do you read the framework/API/lib documentation?
It's in the repo, so ... just read it like normal because you have a local copy?
"Well done" might be overstating things... they're better than other tools alright, but if I only had a dollar for every single time I saw a poor git diff. Forget language-aware diffing; the line-level diffing doesn't even seem to handle even the dumbest cases of indentation and brace-matching with any intelligence when merging. They have a lot of low-hanging fruit for improvement.
The last time I needed to work with Subversion, I found myself using git and git-svn to interact with the version control system.
It's not just the familiarity of the commands, but also the enormous flexibility you get in doing local operations to create and amend your commit history. And the speed of operations of course.
Yeah, I've worked with SVN projects both in the office and in open source, and I use it for a couple of my own projects where I have a lot of binary files (the common alternative with git is something like git-lfs, but that is essentially sticking a CVCS into a DVCS while pretending you are only using the DVCS... and when it comes to DVCSes I prefer Fossil anyway), and I never really saw anything wrong with SVN. I might have been lucky, but that is luck that goes back decades :-P
It's a classic case of a system designed for the largest scales being adopted by 2-person teams because it's "hip to be just like the big boys".
Git was intended for distributed development amongst literally tens of thousands if not hundreds of thousands of programmers.
Centralised SCMs worked just fine for typical dev teams of 2-15 people.
I've rarely seen a team that actually utilises even 10% of Git's features. Conversely, when they accidentally trip over the inherent complexity of a distributed SCM they always cause a giant mess and end up "fixing" it in bad ways.
Eh, Git works, which is all it has to do. The reason you see typical small dev teams use Git is because they're familiar enough with it from dealing with FOSS. And when you are doing FOSS, there's almost no contest – Git destroys the competition for n-person teams when n → ∞.
Besides, it's not even all that complex, there's just an abundance of bad information talking about "diffs" and "branches containing commits" and confusion over the word 'tracking', when most of that is either a convenient representation or an internal implementation detail that doesn't matter at all to the user.
OMG. This was one of my stumbling blocks in learning git. None of the explanations were helpful to me.
The other area that is a tremendous source of confusion is rebasing being described as "rewriting" history. Nothing is rewritten in git during a rebase. There is some moving of labels (branches), which you might call "relabeling", but all the commits that were there before a rebase are still there afterwards in the git repository, along with a bunch of new commits.
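You can verify this yourself; abc1234 below is a placeholder for whatever hash rev-parse prints:

    git switch feature
    git rev-parse HEAD      # note the old tip, say abc1234
    git rebase main         # the branch label now points at new commits
    git show abc1234        # the pre-rebase commit still exists in the repo
    git reflog feature      # and the reflog records every move of the label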
The fact that all git operations are local (well, except push, pull and fetch), and thus mistakes do not enter the (remote) repo so often: that's really a great advantage.
- They are not "associative", as mathematicians say, which means that if a remote branch has two commits A, followed by B, merging A then B might give you a different result from merging just B. Stockholm syndrome has convinced many programmers that this isn't a problem, but it actually prevents the very sane reviewing process of reading individual proposed changes along a branch. In other words, this is one of the main reasons Git users have to follow "flows".
- Conflicts! 3-way merge treats conflicts as a failure to merge, and tools based on 3-way merge (Git, SVN, Mercurial, Fossil, CVS…) stop everything when a conflict happens. Now, remember that these tools are based on snapshots, and snapshots can never have conflicts. Therefore, conflict resolutions are forgotten, the only thing remembered is that "Commit [insert SHA1-hash] (Revision number [insert integer] in SVN) comes after these two parents".
This is so wrong that Git even has a command to "pseudo-fix" it, called `git rerere`.
Again, if this doesn't sound like a problem to you, this could be because Stockholm syndrome taught you that the tool gets upset if you merge from the same remote branch twice, or if you fork a branch that isn't called "master" and isn't on GitHub. In other words, because you're actually using a centralised version control tool ;-)
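For reference, the rerere ("reuse recorded resolution") workflow looks roughly like this (branch names made up):

    git config rerere.enabled true   # start recording conflict resolutions
    git merge topic                  # hits a conflict; resolve by hand, commit
    # later, the same textual conflict reappears (re-merge, rebase, etc.)
    git rebase main                  # rerere replays the recorded resolution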
Few centralized source control systems ever adopted the "three-way merge", which is the big thing that makes git merges seem relatively magic in comparison. It was an evolutionary-pressure thing: not implementing it in a centralized system was a local maximum. When everything is centralized it is easy enough to constrain merges to the easier-to-reason-about "two-way merges", and implementing "three-way merges" was a lot of work for little gain (it didn't sell enough of the centralized tools you never heard of, or no longer remember, that did invest in it). Meanwhile, by assuming merges are distributed and entirely offline-first, decentralized source control systems had to invest in the "three-way merge" problem, and at this point git has invested more than just about any other source control system this side of "academia" (and source control systems you don't hear much about these days, like darcs/pijul).
git makes merges much easier through a lot of hard work on its part because it has to in order to survive. A lot of centralized systems were really bad at merges but we put up with it because it mostly worked and it was "good enough".
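To make "three-way" concrete: the merge consults the common ancestor of the two tips, not just the tips themselves. A sketch, with made-up branch and file names:

    base=$(git merge-base main topic)   # find the common ancestor commit
    git merge topic                     # the porcelain merge uses that base internally
    # the same algorithm is exposed at file level by the plumbing:
    git merge-file --stdout ours.txt base.txt theirs.txt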
Svn does do three-way merges, see the section "External diff" in the svn book.
One thing I found in svn but couldn't find in git is merge tracking: you can merge individual commits and svn records which commits have been merged. The equivalent operation in git would be cherry-picking, and I believe git does not track metadata on cherry-picked commits.
Example: I have a branch A, and I've branched B off from A. B has extra commits x, y, z (in that order). I merge commit y into branch A. (In git, I would have to cherry-pick that commit.) Then I merge the whole branch into A. Now svn knows that y has already been merged, so it only tries x and z. But git (AFAIK!) attempts to apply all three changes, and then detects based on the file content that change y was already present.
In svn, due to merge tracking, I can merge commit y into branch A and also make changes to the code in the process, and svn still knows to skip y on the final "full" merge.
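Roughly what that looks like on each side (revision number, branch paths and the hash are made up):

    # svn records what was merged in the svn:mergeinfo property
    svn merge -c 42 ^/branches/B .     # merge just commit y (revision 42)
    svn commit -m "Merge r42 from B"
    svn merge ^/branches/B .           # later full merge skips r42 automatically

    # git's nearest equivalent keeps no such metadata on the commit
    git cherry-pick <hash-of-y>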
git plumbing I believe can represent such merge DAGs (though possibly not with guaranteed semantics and in the category of "not well enough defined if you attempt it") but I don't think there's ever been a porcelain command in existence that can build/execute such merges. It's long been a complaint I have that cherry-pick is too rebase oriented. As someone who prefers "rebase never" approaches to git it irritates me that there isn't a non-rebase cherry-pick.
Probably going even further off topic, in general I find it irritating with git that there's so much vocal "the DAG is aesthetically displeasing when used as a DAG" support that it seems tough for git to realize the power of its own DAG and/or better encode things that should be encoded in the DAG such as partial merges (versus today just throwing away partial merge information in rebases and cherry-picks) just because it "looks ugly" when actually used as such. Aesthetics should be the last concern over beneficial behavior? Aesthetics can be bolted on with cleaner user interfaces but lost data is always lost data?
The DAG is labelled. Most labels are boringly "parent: commithash". In the raw plumbing you can build other labels, a "cherry-picks: commithash" or a "replaces: commithash" or all sorts of things. git log won't follow such edges today, but it doesn't need to, and a lot of the folks that don't like the DAG aesthetically might prefer that anyway, but the assumptions that "parent: commithash" are the "only" edges in the DAG aren't baked in as much as people assume. For instance, think about git notes, they can have some pretty exotic edges. Again, not many people use that power today and most of the git "porcelain" has no idea what to do with it, but the DAG as a raw data-structure absolutely supports it.
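As a concrete sketch of such an extra, porcelain-invisible edge, git notes can hang arbitrary metadata off a commit (the "cherry-picks" ref name is a made-up convention, and $ORIGINAL/$PICKED stand for the two commit hashes):

    # record which commit a cherry-pick came from, in a dedicated notes ref
    git notes --ref=cherry-picks add -m "picked-from: $ORIGINAL" "$PICKED"
    git notes --ref=cherry-picks show "$PICKED"    # read the edge back
    # git log and git merge ignore these edges; tooling would have to opt in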
Very interesting. Thank you so much for teaching me. I guess it would be possible to create a git plugin that tracks such picked cherries and that provides changed merge and rebase commands. (The plugin would use the existing plumbing to provide new porcelain.)
My current team relies on manual tracking: if there is a bug that needs to be fixed on a release branch and on the main branch, then the developer is responsible for adding commits to both branches. (Whether that is by making the commit on one branch and cherry-picking it to another, or by just making the change twice, is up to each developer.) I'm glad that everyone is so detail-oriented; normally we don't forget. But better support for cherry-picking would be quite useful imho.
> (and source control systems you don't hear much about these days like darcs/pijul)
A bit unfair, Darcs is indeed less fashionable nowadays because of performance issues on very large instances (which few projects actually have), Pijul solves these problems but is still alpha. In other words, this lineage of version control is between two different tools.
It's been maybe 10 years since I last used SVN, so I don't remember exactly, but it was related to moving commits between trunk and maintenance/feature branches and back. I managed to mess up the whole repository a few times. It might have been my own ignorance, though; I was a fresh junior at the time.
On the other hand, with git I had fewer issues related to merging; after playing with it for a couple of hours, it all just made more sense.
Git has become so ubiquitous that I'd find it weird if a recent project was using anything else.
Even if most devs don't know even 10% of it (I'm in this category and I don't care), same goes with my car: It can drive very fast but I limit its speed to something I can control.
The analogy is a bit weak, but in general I see no problem in not using something to its full potential.
> It's a classic case of a system designed for the largest scales being adopted by 2-person teams because it's "hip to be just like the big boys".
People are still saying this? I remember hearing this around 2007 when I started to use git. It's completely wrong. How would I, as a single developer alone, even use CVS or SVN? Have my own server running locally? I tried that and it sucked. Maybe there's some other tool I could use instead, well now I have to learn two tools. Git is popular because it works at all scales.
I worked with SVN briefly as it was out of fashion by the time I entered the workforce, but one thing I remembered about SVN was that there was a lack of standardisation between the clients.
I created the project using TortoiseSVN, and any time a team member tried to add to it using a different client, it'd corrupt the entire repository!
That doesn't sound right. Actual corruption of the repository can happen, but not by simply using another client than your team members.
Maybe you mean something more like committing non-standard line endings? That can easily happen in git too, e.g. when .gitattributes differs between worktrees.
May have been the case, it was about a decade ago now!
Somehow TortoiseSVN solved those issues; perhaps it had some smarts to normalise the line endings?
In any event, I've never had a git repo become corrupted in such a way after many years of continuous usage, but SVN seemed to fall over in a light breeze.
One point to add to the XKCD: driving an automatic transmission car is "steering wheel, gas pedal, brake".
Most people don't even realize that there are micro-explosions happening in there around 2,000 times per minute, let alone the rest of the magic that happens in a modern car.
No one reads the car manual, and probably no one uses most of the options in their car. But it is still such a useful tool that many cannot live without one, just as devs nowadays cannot live without Git.
So I think it is perfectly fine to use only a couple of commands and not even think about the distributed graph, then adjust/learn when you need to understand more.
While I agree with your main point, as someone who used SourceSafe in the past I strongly dispute the claim there was nothing wrong with it, relative to the others that you mention!
Honest question: such as what? Some enormous projects were written with CVS (Concurrent Versions System), including, IIRC, UNIX System V and some Jet Propulsion Lab projects.
The same things that are wrong with them when working in a team. I want to do experiments in separate branches, sync them across my desktop and laptop, merge them painlessly, merge various bug fixes to those branches and go through history without having to go make a cup of coffee while I'm waiting. And most importantly, I don't want those things to be any more pain in the ass than they need to be - and they are more painful in CVS, Subversion and SourceSafe.
Of course complex software can be written without Git - I've been writing software since before Git existed. It can also be (and has been) written without, say, lexical scoping. But it's just harder.
Out of curiosity, the other day I wrote a script that goes through each git commit in a repository and calculates how many lines of code there are, so I can see how it changed over time. That'd be fun in SVN.
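Mine boiled down to something like this rough sketch (it counts every blob, binaries included):

    git rev-list --reverse HEAD |
    while read sha; do
        lines=$(git archive "$sha" | tar -xOf - | wc -l)
        printf '%s %s\n' "$sha" "$lines"
    done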
I'm not a native speaker. The article regularly mentions 'the Git porcelain', or just 'the porcelain', what is meant by this? Do they mean it's fragile or something?
It's jargon. There are two layers of CLI commands in Git. One is user-friendly, like "git commit", and deals with abstract things like commits, diffs, etc. That layer is called "porcelain". The other layer is the low-level commands. They are used by the "porcelain" commands to actually manipulate Git's database. They let you create objects, change objects, trigger re-packing, manipulate refs (branches and tags, but more than just that), manipulate object types and look inside objects.
Git is split into two parts: the plumbing, and the porcelain. It's a toilet metaphor. You don't directly work with the plumbing unless you really know what you're doing.
Porcelain bathware (sinks, baths, toilets) and sewer plumbing make a perfect metaphor for the distinction between UX and underlying infrastructure. Despite the simplicity with which they are connected, one is a near-universal experience and the other is a massive layer cake of specialist engineering that very few laypeople understand.
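A quick way to see the split for yourself:

    # porcelain: human-oriented; output may change between versions
    git log --oneline -1
    # plumbing: stable, script-oriented building blocks underneath
    git rev-parse HEAD      # resolve a ref to an object id
    git cat-file -p HEAD    # print the raw commit object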
It's called git, and Linus knew exactly what he was doing when he named it. I doubt he cares a jot if the plumbing/porcelain metaphor icks you out a bit.
And even more meaningfully for tool authors, the plumbing is very long-lived and general-purpose, with basic objectives like not leaking and not breaking down, while the porcelain is rather easily replaceable, possibly specialized, "smaller" but more complex than plumbing, with meaningful user interface (in Git's case, specifically, designing commands that are unlikely to do harm, easy to understand and efficient after they have been learned) and significant dependence on tastes and environments.
No, plumbing isn't a bad thing. As a generic term I might use it myself. The porcelain part, though, moves it clearly into toilet territory, and that part I find unfitting. And the original poster just didn't understand what was meant by the differentiation between the porcelain and the plumbing. Which caused me to comment that software authors should be a bit more careful when naming things. What starts as an inside joke might not be well understood by outsiders, and off-putting in the extreme cases. After all, naming is about being understood.
That is one thing authors of open source software should learn: while it might be fun to have clever insider terminology, when publishing, come up with terms which are unambiguous to understand. Also, don't use terms which might be off-putting or even offensive to some.
The term "plumbing" is such a common term in the US (maybe Britain also) for the "inner workings" of anything that being offended by it implies manufactured outrage with dubious motives.
This guy is sharing his work of his own free will and deserves respect and owes no one anything at all.
It is more the explicit reference to toilets, which also drags the term plumbing into it, as I cannot think of anything other than sewage in this context. I don't find the term plumbing by itself off-putting.
And it certainly isn't about being offended by "other people's culture". I have talked to enough Americans who shared my sentiment. And as we see here in the discussion, the term just wasn't understood. That was the other part of what I wrote: even if the term is completely innocent, it might be difficult for outsiders to understand.
It's not that I haven't been guilty of bad naming in my own projects. It can be fun or even convenient to come up with "insider" terms. But as soon as you have to communicate with outsiders to the project, they tend to bite you, even if there is nothing offensive about them.
Naming is one of the hard problems in software engineering :)
"That is one thing news aggregator commenters should learn: while it might be fun to just put one's ignorance on display for the world to see, when publishing, come up with a modicum of curiosity and willingness to learn about and engage with for the craft and its culture that one is part of."
Sorry, I have difficulty parsing your statement. While the term plumbing might be understandable by itself, porcelain certainly isn't, until the metaphor of porcelain and plumbing in this context is explained to you. It just isn't efficient to come up with new terms for things that have to be explained to every newcomer to the project. As I wrote in another post, it is nothing I haven't been guilty of myself, but I am trying to avoid it.
The underlying idea is: as a hacker, and as a first step, the onus is on you to come up with the explanation yourself, not to expect the burden of education to be carried to you on somebody else's back. The same goes for other (potential) git users in a similar situation. It is okay to disagree with this, but then one marks oneself as a member of the out-group. HTH; HAND
People are so entitled that they demand 1000s of hours of free (OSS) dev and aesthetics that please them.
If the name is off-putting to you, you're free not to use it. If lots of people agree with you (that the off-putting name is more significant than the contribution) then the market will have spoken and the tool will die.
But if it doesn't die then you should probably reconsider your hang-up.
You completely misunderstood the intention of my posting. I feel no entitlement to use git. My posting was meant as a comment on practises in programming. Being a long-term developer, I know all the temptations: funny names, insider jokes, sometimes inappropriate humor. And yes, they can be fun. But at some point it might be a good idea not to expose all of this to the public, and to try to use appropriate terms for things. It makes things easier to understand.
And you completely misunderstood the intention of my response: people should try to look beyond their own fragile sensibilities and see the contribution rather than the aesthetics. Barring that, they should just pass (instead of criticizing).
The guy came up with git over a weekend. The same guy wrote an operating system and named it after himself. In-jokes and easter eggs are famous in software engineering.
Linus is the original author of two foundational pieces of software technology in use today. I think he can name things whatever he wants.
If you're offended by the terminology of plumbing and porcelain because you can only think of toilets, then I suggest you look around your bathroom and consider what the basins are made of, or the bidet, or some bathtubs.
It's a perfectly valid extension of the "plumbing" term meaning the underlying infrastructure, while porcelain is the way users interact with the system.
When I do "man git", the first two lines are:

    NAME
        git - the stupid content tracker

Further down in that manpage is:

    GIT COMMANDS
        We divide Git into high level ("porcelain") commands and low level
        ("plumbing") commands.
Communication is dependent on context. If two people do not share any context, nothing will be unambiguous. As the sender, it's impossible to know the context of all recipients beforehand.
Without any context, communication is difficult indeed. But the context here is general IT, a lot of context is established there and can be anticipated by any "sender".
They'll have to pry Sublime Text from my cold, dead fingers. I want to like Merge, but I find its UI ugly and difficult. I'm still using Tower, but just got soaked for a very expensive upgrade to Kaleidoscope. UI aside, does Merge have a good diff tool built in?
tl;dr: this is an MSCCI plug-in for git, for compatibility with (quite) old versions of Visual Studio.
This is impressive work. I've used the MSCCI API on several projects - MSCCI is the old version control interface for Visual Studio.
MSCCI is the "Microsoft Source Code Control Interface" and was built to integrate Microsoft Visual SourceSafe into Visual Studio. This was because VSS had a checkout/edit/checkin model by default, so files needed to be explicitly checked out from the server (and were locked) so that they could be modified. MSCCI allowed you to paper over this - as soon as you started typing in the editor, you'd get a MSCCI event and you could do the check out.
Unfortunately, the API is really focused on VSS integration. Trying to bring another version control system into this API is... painful. This was - always - the worst part of the UI to work on in SourceOffSite and Vault.
Thankfully, more modern versions of Visual Studio dropped MSCCI (by "more modern" I mean 2005 - adding Team Foundation Server integration to Visual Studio meant that MSCCI was finally deprecated and a new API was added).
However, legacy development projects exist for older Microsoft platforms and using older Visual Studio versions. So I'm both surprised to see that this exists, and am rather heartened by its existence. (But I still feel sorry for anybody who has to use it.)
I went through the lessons at https://learngitbranching.js.org/ when I was first learning git. Afterwards, I went looking for a program that would create a similar visual representation of the branches... and I'm still looking.
There's a big listing of GUI clients on the git website here: https://git-scm.com/downloads/guis. One obstacle I didn't expect to face is that many don't have Linux versions. GitUp is probably the closest to what I'm looking for, but it's Mac-only. A Windows-only program, gmaster, also caught my eye.
For the most part, I've used GitVine and Ungit. I've found both helpful (I don't know how people keep track of the "paths" of their repositories without seeing them), but I've run into issues with how the commits are spaced out when the trees are rendered in both programs. GitVine occasionally criss-crosses the "parent" pointer arrows so that they overlap, which is really confusing.
Ungit hasn't overlapped them, yet, but it spaces the commits kind of strangely. HEAD will be right at the top, but then I'll have to scroll an entire page to reach the parent. Meanwhile, a nearby branch will be shot diagonally way out to the far right (I think this happens because there are branches a few screens down that are "occupying" that horizontal space). I've found zooming in and out to be the better way to travel around.
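For what it's worth, plain git can draw a crude text version of those graphs, which I sometimes fall back on:

    git log --graph --oneline --decorate --all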
I've worked with the Git CLI until I discovered GitKraken. Haven't touched the CLI since. It's one of the few software products I pay for without any regrets.