Nice writeup -- I had been wondering how it compares to Git (and whether it has any killer features) from the perspective of someone who has used it for a while. Conflict markers seem like the biggest one to me -- rebase workflows between superficially divergent branches have always had sharp edges in Git. It's never impossible, but it's enough of a pain that I have wondered if there's a better way.
For me, it's not so much that jj has any specific killer feature; it's that it has fewer, more orthogonal primitives. This means I can do more with less. Simpler, but also more powerful, somehow.
In some ways, this means jj has fewer features than git. For example, jj doesn't have an index. But that's not because you can't do what the index does for you -- the index is just a commit, like any other. This means you can bring your full set of tools for working on commits to bear on the "index", rather than it being its own special feature. It also means that commands don't need special "and do this to the index" flags, making them simpler overall while retaining the same power.
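As a rough illustration (a hypothetical transcript, not meant to be run verbatim -- exact jj flags vary by version), a git-style "staging" workflow might look like:

```shell
jj new -m "feature work"   # start a fresh working-copy commit on top
# ...edit files; jj snapshots the working copy into that commit automatically...
jj squash                  # fold the working-copy changes into the parent commit,
                           # which plays the role git's index would
jj diff -r @-              # review that parent commit, i.e. your "staged" state
```

Because the "index" here is an ordinary commit, every commit-manipulation command (rebase, split, describe, and so on) works on it too.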
You can make a critique of the flimsier parts of anti-DEI fashions with substance but I don't think this is one of them. There are so many questionable assumptions and strawmen layered into the end conclusion that it's hard to tell whether the objection is with morality or taste.
For example:
- Responsible people have responsibilities -- what is this tautology supposed to mean? That they can no longer value vitality, or hold views on what style of life makes it worth living?
- The internet is no longer the world's great frontier -- according to whom? It is certainly the case that the old frontiers of the internet have either imploded or been thoroughly domesticated, but the internet is now thousands or millions of times larger than in its early days, so it's hard to say that frontiers (plural) are gone rather than that one simply isn't trying hard enough to find them.
- Chance is a great factor in success - a belief, to be sure, but not an observation. Chance is often a factor in success, but it's not often a factor in downward mobility (which is extraordinarily common across the classes). It's easy to blame chance for negative outcomes, but it's hard to fully test and regularly exercise (or gradually expand) the limits of what is in one's control.
- Long gone are the days of the solo "great mover" - how does this square with solopreneurs vibe coding their way with gen AI to large independent businesses, and lean organizations that have very quickly built the highest revenue per employee at the highest growth rates in history? If anything, individuals are inordinately empowered with tools today that didn't exist in usable form 3-6 months ago, never mind a year ago.
For a piece of writing that talks so confidently about how stuck in the past others are, it's hard not to question whether the author themselves is stuck in the past in some way.
Putting aside the conclusions about alternatives (which I can't say are necessarily persuasive, given the externalities they would cause), I think the methodology could be improved here. In particular, why choose 36 million as the divisor? A more reasonable divisor would be 340M, the total US population. At least that way you could get to tax burden per resident, which feels like a far more useful unit quantity.
To take it even further, you could group current annual revenue sources by individual/business/transaction based revenue/taxes and then determine unit burden for the different kinds of units. I think that would lead to a more illuminating analysis and set a framework for a potentially more revelatory discovery process.
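A minimal sketch of that methodology, using round, hypothetical figures (the revenue amounts, category groupings, and unit counts below are all illustrative placeholders, not actual Treasury data):

```python
# Hypothetical annual federal revenue by source, in USD.
revenue = {
    "individual_income_tax": 2.2e12,
    "payroll_tax":           1.6e12,
    "corporate_income_tax":  0.5e12,
    "excise_and_other":      0.3e12,
}

# The kind of unit each source is spread over, and an illustrative count.
units = {
    "individual_income_tax": ("resident", 340e6),
    "payroll_tax":           ("worker",   165e6),
    "corporate_income_tax":  ("business", 33e6),
    "excise_and_other":      ("resident", 340e6),
}

# Unit burden = revenue from a source divided by the units bearing it.
for source, total in revenue.items():
    unit, count = units[source]
    print(f"{source}: ${total / count:,.0f} per {unit}")
```

The point is that each revenue stream gets divided by the population that actually bears it, rather than dividing everything by one arbitrary number.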
There's an old Joel Spolsky post that's evergreen about this strategy -- "commoditize your complement" [1]. I think it's done for the same reason Meta has made llama reasonably open -- making it open ensures that a proprietary monopoly over AI doesn't threaten your business model, which is noteworthy when your business model might include aggregating tons of UGC and monetizing engagement over it. True, you may not be able to run the only "walled garden" around it anymore, but at least someone else can't raid your walled garden to make a new one that you can't resell anymore. That's the simplest strategic rationale I could give for it, but I can imagine deeper layers going beyond that.
I think GP's point is that this says as much about the interview design and interviewer skill as it does about the candidate's tools.
If you do a rote interview that's easy to game with AI, it will certainly be harder to detect them cheating.
If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
> If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
I understood their point, but my point is in direct opposition to theirs: at some point, with AI advances, this will essentially become impossible. You can make it as open ended as you want, but if AI continues to improve, the human interviewee can simply act as a ventriloquist dummy for the AI and get the job. Stated another way, what kind of "effective and well designed open ended interview" can you make that would not succumb to this problem?
Yes, that's eventually what will happen, but it becomes quite expensive, especially for smaller companies -- and they might not even have an office to conduct the interview in if they're a remote company. It's simply best to hire slow and fire fast; you save more money that way than by bringing in every viable candidate for an in-person interview.
If you're a small company you can't afford to fire people. The cost in lost productivity is immense, so termination is a last resort.
Likewise with hiring; at a small company you're looking to fill an immediate need and are losing money every day the role isn't filled. You wouldn't bring in every viable candidate, you'd bring in the first viable candidate.
FAANG hiring practices assume a budget far past any exit point in your mind.
They'd check their network for a seed engineer who can recognize talented people by talking to them.
To put the whole concern in a nutshell:
If AI were good enough to fool a seasoned engineer in an interview, that engineer would already be using the AI themselves for work and wouldn't need to hire an actual body.
My POV is that of someone who's indexed on what works for gauging technical signal at startups, so take it for what it's worth. But a lot of what I gauge for is a blend of not just technical capability, but the ability to translate that into prudent decisions with product instincts around business outcomes. AI is getting better at solving technical problems it's seen before in a black box, but it struggles to tailor that to the context you give it: pre-existing constraints around user behavior, existing infrastructure/architecture, business domain, and resources.
To be fair, many humans do too. But many promising candidates, even at the mid-level band of experience, who thrive at organizations I've approved them into are able to eventually reach a good enough balance of many tradeoffs (technical and otherwise), with a pretty clean and compact amount of back and forth that demonstrates thoughtfulness, curiosity, and efficacy.
If someone can get to that level of capability in a technical interviewing process using AI without it being noticeable, I'd be really excited about the world. I'm not holding my breath for that, though (and having done LOTS of interviews over the past few quarters, it would be a great problem to have).
My solution, if I were to have the luxury of having that problem, would be a pretty blunt instrument: I'd change my process to make AI tool use part of the interview itself. I'd give candidates a problem to solve and a tuned in-house AI to use in solving it, and have their ability to prompt it well, integrate its results, and pressure check its assumptions (and correct its mistakes or artifacts) be part of the interview. I'd press to see how creatively they used the tool -- did they figure out a clever way to use it for leverage that I wouldn't have considered before? Extra points for that. Can they use it fluidly, in the heat of a back and forth architectural or prototyping session, as an extension of how they problem solve? That will likely become a material precondition of being a senior engineer in the future.
I think we're still a few quarters (to a few years) away from that, but it will be an exciting place to get to. But ultimately, whether they're using a tool or not, it's an augment to how they solve problems and not a replacement. If it ever gets to be the latter, I wouldn't worry too much -- you probably won't need to do much hiring because then you'll truly be able to use agentic AI to pre-empt the need for it! But something tells me that day (which people keep telling me will come) will never actually come, and we will always need good engineers as thought partners, and instead it will just raise the bar and differentiation between truly excellent engineers and middle of the pack ones.
People don't really call the police, nor sue over this. But they can, and have in the past.
If it gets bad, look for people starting to seek legal recourse.
People aren't developers with 5 years of experience if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.
So you create an interview process that can only be passed by a skilled dev, including having them sign a doc saying the code is entirely their own work, referencing only a language manual/manpages.
And if they show up to work incapable of doing the same, it's time to call the cops.
That's probably the only way to deal with scam artists and scum, going forward.
Can you cite case law where someone misrepresented their capabilities in a job interview and was criminally prosecuted? What criminal statute, specifically, was charged? You won't find it, because at worst this would fall under a contract dispute and hence civil law. Screeching "fraud is a crime" hysterically serves no one.
Fraud can be described as deceit to profit in some way. You may note the rigidity of the process above, where I indicated a defined set of conditions.
It costs employers money to onboard someone, not just in pay, but in other employees' time training that person. Obviously the case must be clear cut, but I've personally hired someone who clearly cheated during the remote phone interview and literally couldn't code a function in any language in person.
There are people with absolutely no background as a coder applying to jobs requiring 5 years of experience, then fraudulently misrepresenting the work of others as their own to get the job.
That's fraud.
As I said, it's not being prosecuted as such now. But if this keeps up?
> People aren't developers with 5 years experience, if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.
I won't name names, but there are a lot of Consulting companies that feed off Government contracts that are literally this.
"Experience" means a little or a lot, depending on your background. I've met plenty of people with "years of experience" that are objectively terrible programmers.
I think it's lazy and superstitious to make an analogy between practices in hygiene in medicine and software engineering.
But if you want to take the analogy to its absurd conclusion: I think a lot of people would hate the idea of needing to wash their hands to do a surgery remote-controlled by a joystick, because the sterility of the instrument doing the operation is independent of the sterility of the hands controlling it from afar.
Doctors found out that washing their hands caused fewer people to die, and it's a bit of a nuisance but totally worth it.
Software devs have not found out that anyone dies unless we do strict TDD. But we have found out that unless we have a good amount of automated tests, the software becomes brittle and hard to change.
TDD is more of a style that some swear by, with alleged properties that make everything better, like the particular ritual doctors follow when washing their hands. But just washing the hands thoroughly would probably also help keep people from dying.
I think SOM is more of a narrative tool that helps you explain part of your market approach than a binary classifier that either leads to or away from fundability. Specifically in healthcare, there's a great passage [1] in an article by Steve Kraus, one of the most well regarded healthcare startup investors, that I think has stood the test of time and at least expresses part of Bessemer's thinking on this pretty insightfully:
```
Healthcare represents approximately 18% of US GDP or, in other terms, $3.6 trillion of annual spending, and continues to grow by low, single-digit percentages each year. Despite the impressive scale of the US healthcare industry, not a single healthcare company has a $3.6T total addressable market (TAM).

Instead, healthcare looks a lot more like several thousand billion-dollar markets that include everything from healthcare technology, commercial and government-sponsored care to drugs, medical equipment, home health services, out of pocket costs, and more, all of which together comprise the $3.6T industry figure.
```
The DX7 will always have a special place in my heart, being the spiritual predecessor to my favorite soft synth, FM8. If you want a modern FM synthesis experience, FM8 has been that for over a decade: low CPU usage, incredibly versatile, and really easy to work with in the mix (FM synthesis tends to be easy to shape and layer spectrally).
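As a toy illustration of what FM synthesis actually computes (a two-operator sketch; the frequencies and modulation index below are arbitrary example values, not DX7 or FM8 presets):

```python
import math

def fm_sample(t, carrier_hz=220.0, ratio=2.0, index=3.0):
    """One sample of y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)):
    a modulator sine wave varies the phase of a carrier sine wave."""
    mod = math.sin(2 * math.pi * carrier_hz * ratio * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * mod)

sample_rate = 44_100
tone = [fm_sample(n / sample_rate) for n in range(sample_rate)]  # 1 second of audio
```

Raising the modulation index adds sidebands and brightens the spectrum, which is why FM patches are so easy to shape spectrally: one parameter sweeps the timbre from a pure sine to something very rich.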