Chad's a super cool guy. I met him at a Python conference in Nashville and he was using this big cutter to punch little heart-shaped holes in pennies to promote his service Gittip (later Gratipay). I had mine for years until it was stolen :/ Glad he's still out there doing good stuff! Chad, if you read this: you really inspired me that day, the way you were so cool with random strangers and such a genuinely attentive listener.
It is amazing how software engineers and AI researchers are automating themselves out of their jobs. Sure, it is all good if the revenue generated from automation is used for the good of all people, and if the AIs are aligned/friendly. But looking at current indications, none of that is going to happen.
AI is just tech. It will enable things, including automation, just like previous generations of tech.
The dawn of the industrial revolution, in which we leveraged machines that were measured in 'horse power' (!) doing far more work than a man (or horse), didn't 'put us out of business' so to speak. Wages skyrocketed. Towards the start of the 20th century wages were high enough that rich folk couldn't afford to keep regular workers around as staff anymore.
I feel it would be naive to think it will just work itself out through capitalism, though. It's worth thinking about what impact it will have and whether we need some new policies to help society truly capitalize, in social-value terms, on automating everything we can automate. GDP is not a measure of how well your society is doing; with automation we could skyrocket GDP and absolutely tank social measures of wellbeing.
In other words social reforms around how people earn income, or gain housing, food and necessities, might need to happen as we remove more and more jobs.
I'd like to think that as people are freed up they'll be able to do other jobs, but if we plan to automate everything we possibly can that has to eventually be false. Unless everyone's going to be a creative, and even the commercial side of that is starting to be automated. At my work people are already using AI generated images instead of stock photos, and AI generated jingles instead of stock music.
Automation has been happening since the industrial revolution (or the stone age, depending on how one sees it). If jobs were only removed, humanity would have been jobless long ago. People surely thought exactly the same thoughts a few centuries ago.
OP's statement is more precisely framed as jobs being removed on net, not only removed, which is a little harder to argue against.
But not much. What I think would be an economically sound case is an argument that Ricardian comparative advantage breaks down at the extremes. A robot that can do everything twice as good as you will still benefit from trade; a robot that can do everything a thousand times as good as you might just view you as atoms it could use better in a different configuration.
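The comparative-advantage point can be made concrete with made-up numbers: a robot strictly better at both tasks still gains from trading with a human, as long as their relative productivities differ. Everything below (the two tasks, the output rates) is illustrative, not data:

```python
# Hypothetical output rates (units per day at full time on one task).
robot = {"features": 10, "reviews": 20}   # strictly better at both
human = {"features": 4,  "reviews": 5}

def split_evenly(producer):
    # Half a day on each task.
    return {task: rate / 2 for task, rate in producer.items()}

# No trade: each splits the day 50/50 between the two tasks.
no_trade = {task: split_evenly(robot)[task] + split_evenly(human)[task]
            for task in robot}

# Opportunity cost of one feature, measured in reviews forgone.
oc_robot = robot["reviews"] / robot["features"]   # 2.0
oc_human = human["reviews"] / human["features"]   # 1.25
assert oc_human < oc_robot  # human has the comparative advantage in features

# Trade: the human goes all-in on features; the robot covers the same
# review output as before and spends its leftover time on features.
review_fraction = no_trade["reviews"] / robot["reviews"]   # 0.625 of a day
with_trade = {
    "features": human["features"] + (1 - review_fraction) * robot["features"],
    "reviews": no_trade["reviews"],
}

print(no_trade)    # {'features': 7.0, 'reviews': 12.5}
print(with_trade)  # {'features': 7.75, 'reviews': 12.5}
```

Same review output, strictly more features: both sides gain. The breakdown at the extremes is exactly that the "atoms used better in a different configuration" scenario steps outside this model entirely.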
We're already seeing a blight of pointless jobs existing, does this just mean we all end up working pointless jobs instead of enjoying the fruits of our automation?
On the whole timescale, the same principle applies.
Even if we define the start of automation as the industrial revolution, if automation (mostly) removed "interesting" jobs and compensated with "pointless" ones, by now all the jobs would be "pointless"; but they very evidently aren't.
The problem is, as I wrote, that some people think we're in a special period in history (most of the time, based on an ambiguous interpretation of the term "robot"), while there have been significant revolutions for a long time (possibly forever). There's nothing different under the sun (on the whole timescale).
I share your skepticism; however, it would be more dangerous to try to make predictions.
Moreover, I think the worrying is a bit too much: AI isn't really replacing any jobs; mostly it's enabling.
For the most part, AI is 'improving things' - just as any other bit of software or R&D.
That the AI can do 'a bit better speech to text', say 85% accuracy instead of the classical 72%, is not going to change the world.
Almost all AI will make its way into things incrementally, unnoticed. Things will just get a bit smarter, safer, shinier.
There will be a small number of 'step functions' but likely not to put anyone out of work.
It's better to think of NNs as not even AI; the label is a distraction. Imagine if we just called them 'adaptive algorithms' and didn't have the hype. What would we think about them then? Without Elon Musk talking about Skynet killing us all?
There are maybe some bits of manual labour here and there where a narrow range of factory workers will be out of luck, but that will be temporary, and there will always be unskilled labour beyond the AI if we need those kinds of jobs.
The way I see it, the only jobs you are realistically automating (without full-on AGI) are the CRUD and pipework jobs that are unambiguously explainable without a lot of context.
The difficulty/complexity in delivering software is mostly on having accurate domain knowledge on the problem you are trying to solve and how users or other machines in that environment interact with said software.
So again, unless we hit a landmark of a very outperforming AGI I think that much of the domain knowledge in many software 'niches' will be out of reach of automation.
That is not to say that many sub-tasks/boilerplate cannot be automated.
But you are missing the point: a robot could code 90% of the boilerplate and leave it to someone with domain knowledge to check it through and possibly tweak a few bits.
Anything that can be automated will be automated. Software engineers are very expensive so the money is there to be made.
So are doctors, accountants etc.
I don't think it's feasible, but I'm sure people are working towards it. I'd actually start with healthcare, because the West is aging rapidly.
Using the website as a primer, GPT-3 comes up with this:
> Give a TOC for a presentation titled: "How Robb Oat uses Robb Oat to build Robb Oat"
1. Introduction
2. What is Robb Oat?
3. How Robb Oat uses Robb Oat to build Robb Oat
4. Conclusion
> Give 4 speaker note bullet points for each point:
1. Introduction
-Robo Oat is a robot software engineer that reads GitHub issues and submits pull requests.
-It is still in development and currently only fixes code snippets that are linked to in the issue.
-We are going to be discussing how Robb Oat uses Robb Oat to build Robb Oat.
-This will include a demonstration of how to use Robb Oat to fix a code snippet.
2. What is Robb Oat?
-Robb Oat is a robot software engineer that reads GitHub issues and submits pull requests.
-It is still in development and currently only fixes code snippets that are linked to in the issue.
3. How Robb Oat uses Robb Oat to build Robb Oat
-Robb Oat uses Robb Oat to build Robb Oat by trying to fix every GitHub Issue it understands.
-It does this by creating a pull request with an attempted fix.
4. Conclusion
-Robb Oat is still in development and is only able to fix code snippets that are linked to in the issue.
-However, it is a useful tool for those who want to quickly fix small code snippets.
-It is also a good
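(The completion cuts off there.) The issue-to-PR loop the generated slides describe can be sketched in a few lines. Every function below is a hardcoded, illustrative stand-in, not Robb Oat's actual code or the real GitHub API:

```python
# Sketch of: read open issues, attempt a fix on the linked snippet,
# and open a pull request if the fix changed anything.

def fetch_open_issues():
    # Stand-in for listing open issues via the GitHub API.
    return [{"number": 1, "snippet": "def add(a, b): return a - b"}]

def attempt_fix(snippet):
    # Stand-in for a code model proposing a repair; here a toy rewrite.
    return snippet.replace("a - b", "a + b")

def open_pull_request(number, patched):
    # Stand-in for opening a PR; just report what would be submitted.
    return f"PR for issue #{number}: {patched}"

prs = []
for issue in fetch_open_issues():
    fixed = attempt_fix(issue["snippet"])
    if fixed != issue["snippet"]:   # only open a PR if the fix changed anything
        prs.append(open_pull_request(issue["number"], fixed))

print(prs)  # ['PR for issue #1: def add(a, b): return a + b']
```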
>If, he thought to himself, such a machine is a virtual impossibility, then it must logically be a finite improbability. So all I have to do in order to make one is to work out exactly how improbable it is, feed that figure into the finite improbability generator, give it a fresh cup of really hot tea ... and turn it on!
I just want to appreciate the name. It was clever (at least by my standard, which, admittedly, isn't high), and the name was suggested by DALL-E, which has no context on the text input.
Yeah it blew me away when DALL-E spit that out on the first try. Both the name and the logo are so perfect, and DALL-E cut the time down from hours if not days or weeks to seconds to get there. At the same time I did iterate on it, I made it two words, Robb Oat, instead of one, Robboat, to make it more relatable and easier to figure out how to pronounce. :^)
I coded it up myself and then was like "wait I should be dogfooding this" so I did and Robb Oat spit out basically what I had just written. That was the aha moment for me.
Also I've been reading Innovator's Dilemma and basically all disruptive technology sucks at first so I haven't been too discouraged about it.
True! It is not disruptive just because it sucks. In fact, it is not disruptive at all yet—what has it disrupted? Nothing. :^) What I said, though, is that I'm not _discouraged_ at its limited utility at this point in the game. It's to be expected. Everything, as you say, sucks at first.
In terms of Innovator's Dilemma: Robb Oat is not useful to existing "customers" of software engineers (product managers, engineering managers, lead engineers). But because of its drastically lower cost and higher speed, it _may_ open up new markets that barely exist right now. If successful in these new, smaller markets, it may eventually move upmarket and displace existing "technology" (i.e., us) once it can compete on productivity. That's the point at which it will have proven itself to be disruptive.
The next step for Robb Oat, if it's to go anywhere, is to find the smaller, newer market where it can be useful. TBD
It's still a win if the person writing them costs less than the person who'd be coding, and/or if it expands the pool of people capable of making changes to the code base.
So now will there be a university with a Software Engineering degree certified by the country's engineering organisation, and the respective professional-title exams, for robots?
This is a weird thought, but I think I stand a better chance of being able to learn to speak "bot" than I do of ever actually learning to write code, even though code is pretty much literally a type of "speaking 'bot.'"
I've tried a handful of times and just can't get past how unbearably tedious I find writing for loops, so I give up. It's like trying to read the whole Bible and getting bogged down in Leviticus. I'm sure it gets more interesting. I just can't push through that part!
This is my primary reason, after IP abuse, for disdaining Copilot. We're still going to need engineers, and I'd rather we make our languages and tooling more expressive than turn into prompt engineers.
It’s not. I’m not saying things are perfect the way they are now and we shouldn’t look for ways to be productive, it’s that I don’t want us to get stuck in a local maximum like our industry tends to do. If Python isn’t expressive enough, let’s evolve on languages, not pile kludgy code generators on top that must be finessed with special prompts.
Sometimes I wonder how good jira would be if developers actually embraced their environment and improved it rather than deal with $100/month plugins that don’t really do much.
A group of developers did this ten years ago; unfortunately, they're still waiting for the jira admins to grant them the right permissions to install the plugins they wrote.
AI self-programming systems are the bottleneck gateway to the technological singularity. The main issues are communicating with humans to get feedback and solving the right problems.
Once this becomes more easily doable, it will likely displace most generic, low-skilled software engineers within a few years. Then there will be other markets such as AI-assisted coding, which is all the rage right now; the trick is making them actually useful.
I wonder if you could use a vector similarity search to find a specific implementation in code based on a feature description. I first thought you would need a language model with a large-enough context window to digest the whole codebase, but after thinking about it a bit longer it seems like the right way to do this would be similarity search. Wonder if it would work at all.
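A minimal sketch of that idea, using a bag-of-words cosine similarity as a toy stand-in for a real embedding model (the `embed` function and the tiny "codebase" are assumptions for illustration, not any specific library's API):

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a learned embedding: a bag-of-words vector.
    # A real system would use a code-aware embedding model here.
    return Counter(text.lower().replace("_", " ").split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny made-up codebase, indexed function by function.
codebase = {
    "parse_config":  "def parse_config(path): read yaml config file into dict",
    "send_email":    "def send_email(to, subject): send notification email via smtp",
    "retry_request": "def retry_request(url): retry failed http request with backoff",
}
index = {name: embed(source) for name, source in codebase.items()}

def search(feature_description):
    # Return the function whose vector is most similar to the query.
    query = embed(feature_description)
    return max(index, key=lambda name: cosine(query, index[name]))

print(search("notify the user by email"))  # send_email
```

A real version would chunk the repository into functions, embed each chunk with an actual model, and store the vectors in an approximate-nearest-neighbour index; the ranking logic stays the same, which is why similarity search sidesteps the context-window problem.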
This is cute and all, but I think their software engineers are coding the wrong projects. This (and other recent projects) indicate a lack of strategic direction from GitHub, which is now Microsoft.
Yes, it's bad and confusing, but less so than not doing it, because if you place a 2-day old submission on the front page, the thread fills up with "how is this 2 days old and on the front page?"
Maybe there's a better solution but we don't know yet what it is.
This frequently happens with my submissions; the flammable material doesn't always catch fire with the first strike. And it's probably not worth complaining about... hear me out.
Reframing it: Are you pleased you were able to accurately predict something that ended up being of interest to a great many people? You deserve to feel good about this! In the end, the thing got shared as intended. This is success.
At least, this is what I've learned to tell myself ;).
HN points are probably the most worthless form of currency, except for maybe crypto.
That sounds like the second chance pool. So it's possible the submitter actually originally submitted it before you! Funny, I never thought about this being a reachable state. You're right, it is confusing.
I email Dang (hn@ycombinator.com) when I'm confused or unclear on site behavior. He's awfully generous to humor me.
Here's an example issue: https://github.com/robb-oat/server/issues/7