
I'm a software engineer with 7 years of experience.

I've built products used by millions of users, I've built ML models and AI coding tools, and I've been using AI to generate 70% of the code that's shipped to production in the last few months.

I do believe that we can replace a large percentage of software engineers with AI in the next 3 years (2025-2027).

Also this 1 million dollar Kaggle competition on AI coding just launched: https://www.kaggle.com/competitions/konwinski-prize



Ah, the starry-eyed junior dev mindset. Doing web dev isn't the only thing in the world.

AI will absolutely accelerate development, but we still need so much more modernization across all industries. We still have factories running on Windows NT with software made in the 80s. We have mines, lumberyards, farms, retail, parking, bio labs, etc. So many things need to be upgraded to the same standard as big tech, but haven't been because it was too expensive.

Much of this, AI isn't great at. It's easy to spit out a React web page that has been done 10,000 times already by millions of devs, but it's not so easy to make sure your smelter doesn't overheat or go cold using 20,000 live sensors.
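
Purely as a toy illustration of the kind of problem I mean (not a real control system; every name and threshold below is made up), even the most naive version of that monitoring loop looks more like this than like a CRUD page:

    # Toy sketch only: hypothetical names and thresholds, nothing like a
    # real safety-rated control system (no redundancy, no hardware interlock).
    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor_id: str
        temp_c: float

    MIN_TEMP_C = 1150.0   # made-up "going cold" threshold
    MAX_TEMP_C = 1350.0   # made-up "overheating" threshold

    def out_of_band(readings: list[Reading]) -> list[str]:
        """Return ids of sensors outside the safe temperature band."""
        return [r.sensor_id for r in readings
                if not (MIN_TEMP_C <= r.temp_c <= MAX_TEMP_C)]

    # A real system would also handle stale or failed sensors, sensor
    # disagreement, rate-of-change alarms, and a fail-safe path that
    # doesn't depend on this code at all.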

New devs will have a harder time getting in, but they'll also have an easier time learning. Senior devs will be able to supervise much more.

The industry will change, but it's far from over. If anything, it will grow massively and make other industries much more productive.


I have to clarify that my statement doesn't apply to software engineers who are not working in tech companies or tech industries.

I usually call them IT instead of tech. I wrote more in my blog post: https://16x.engineer/2022/08/23/it-vs-tech.html

I have no knowledge of other industries where software is not the primary product.


The word "tech industry" is one of the more hilarious terms. Sure, some startup with a website and App, is "technology", but aircrafts, cars, rockets, heavy machinery (which all make heavy use of complex software, with rigorous safety constraints and which all involve ongoing academic research, as well as extremely detailed Design processes) is not "technology".

Truly one of the more grandiose terms people in the software industry use to describe themselves. Same with every single person calling themselves an engineer, something which in many countries would be illegal.


For most industries the tech isn't the primary product.

Google sells ads, not tech. Uber sells rides, not tech. Amazon sells a marketplace, not tech. Tesla sells cars, not tech.

Tech is just the tool.


Yes. The products you listed are not in the tech industry by my definition either. You are right.

Google Ads is in the ad industry.

Uber is in the transport industry.

Amazon is in the e-commerce industry.

I'm glad that we understand that we are talking about different things.


Can you provide an example of a company I would have heard of that is in the tech industry, just so I can get a better understanding of what y'all are talking about?

Is Oracle in the tech industry?

Also on the blog post you linked you seem to imply that you do consider Google to be a tech industry company, so I am very confused.

> In tech industry, software is the main product of the company. It can be sold to a customer (B2C product) or a business (B2B product).

> For example, Facebook is the main product of Meta. Google Search is the main product of Google.


It depends on what you consider as core product for Google.

For some people (consumers), it would be Google Search; that is a piece of software, so in that sense Google is a tech company because its main product is Google Search.

However, marketers who use Google Ads deal with the ads division of Google, and that division's main product is the ads service. So in that division, the main product is ad space, not software. Rightfully so, Google Ads is not in the tech industry, but in the ad industry, enabled by tech.

For pure tech companies, I would say the AWS division in Amazon, Microsoft (the Windows, Azure, and GitHub divisions), and the Facebook/Instagram division in Meta (not the Ads division).

Then there are a lot of companies that just sell software as a service (SaaS) or software licenses. There are millions of them, but to name a few: Figma, Slack, Vercel, Supabase, Docker, OpenAI, Salesforce, Oracle.


Google's main product is ads, not search. Search is also not tech; it's a directory of websites.

Pure tech products are GCP or AWS.

Instagram's business is photos, Facebook's is connections. Figma is a design tool, Slack is a way to connect with coworkers.

Most of tech isn't really tech.


Thanks for expounding on your views.

Personally I think Oracle is in the licensing business, but perhaps they also have a "tech industry division," as you say.


I'm a principal SWE with 25 years of experience, and I think software today is comically bad and way too hard to use. So I think we can get engineers to write better software with these tools.

The talk of "replacement" is going to be premature until we get something remotely resembling AGI. Unless your problems are so simple that a monkey could solve them, the AI of today and of the foreseeable future is not going to solve them end to end. At best it'll fill in the easy parts, which you probably don't want to do anyway: write a test, do a simple refactor, bang out a simple script to pay down some engineering debt. I've yet to see a system that doesn't crap out at the very beginning on the real problems that I solve on a daily basis. I'm by no means a naysayer - I work in this field and use AI many times daily.


Funny enough, I now write better code than I used to thanks to AI, for two reasons:

- AI naturally writes code that is more organized and clean (proper abstractions, no messy code)

- I've recognized that, for AI to write code in an existing codebase, the code has to be clean, organized, and sensible, so I tend to do more refactoring to make sure AI can take modules over and update them when needed (see the sketch below for the kind of refactoring I mean)
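
As a rough, hypothetical sketch of that second point (names are invented, not from any real codebase): pull the logic an agent will be asked to touch into a small, typed, pure function with obvious inputs and outputs, so it can be changed and tested in isolation.

    # Hypothetical example: logic extracted from a request handler into a
    # pure function, so an AI agent (or a human) can modify it in isolation.

    def discounted_total(prices: list[float], discount_rate: float) -> float:
        """Total after applying a flat discount rate (0.0 to 1.0)."""
        if not 0.0 <= discount_rate <= 1.0:
            raise ValueError("discount_rate must be between 0 and 1")
        return sum(prices) * (1.0 - discount_rate)

    def handle_checkout(cart_prices: list[float]) -> dict:
        # The handler only orchestrates; the logic an agent might be asked
        # to change now lives in one obvious, testable place.
        return {"total": discounted_total(cart_prices, discount_rate=0.1)}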


> Funny enough, I now write better code than I used to thanks to AI, for two reasons:

I assume you also believe you'll be one of the developers AI doesn't replace.


I'm actively transitioning out of a "software engineer" role to be more open-minded about how to coexist with AI while still contributing value.

Prompt engineering, organizing code so AI agents can be more effective, guiding non-technical people on how to leverage AI, etc. I'm also building and selling products myself.


Today an AI told me that a non-behavioral change in my codebase was going to give us a 10x improvement on our benchmarks.

Frankly, if you were writing code that is structured worse than what GPT or whatever generates today, then you are just a mediocre developer.


See, the thing is, to determine which abstractions are "right and proper" you _already need a software engineer_ who knows those kinds of things. Moreover, that engineer needs the ability to read the code, understand it, and plan its evolution over time. He/she also needs to be able to fix the bugs, because there will be bugs.


I think your main thesis is that "AI of today and foreseeable future is not going to solve them end to end."

My belief is that we can't solve them today (I agree with you), but we can solve them in the foreseeable future (within 3 years).

So it is really a matter of different beliefs. And I don't think we will be able to convince each other to switch beliefs.

Let's just watch what happens?


I'm with you 100% of the way on this one. I'm coding with Claude 3.5 right now using Aider. The future is clear at this point. It won't get worse, and there's still so much low-hanging fruit. Expertise is still useful to guide it, but we're all product managers now.


There are a lot more photographers now than there ever were painters, and the size of the industry is much larger than it used to be. It is true that our work will change, but personally I think that's great - I don't enjoy the initial hump that you usually have to overcome before you begin to actually solve real problems, and AI is often able to take me over that hump, or fill in things that don't matter. E.g. I'm a backend person but need a frontend for a demo - I'm able to do that on my own now, without spending days figuring out some harebrained web framework and CSS stack - something I probably wouldn't do at all if there were no AI.


Your analogy fails because the economy still needed human workers to take the photographs, whereas there is a possibility that in 5 or 10 years the economy will have no need and no use for most people.


I work in this field, and I would bet that in 5-10 years the employment situation will not be much different from today unless we suddenly invent AGI, which I see no sign of happening even remotely. Job definitions will change a bit, productivity will improve, cost per LOC will drop, and more underserved niches will become tractable/profitable.


Well, I know what will happen within about a one-year time horizon. As far as developer-assistance models are concerned, the difference at the end of 2025 is not going to be dramatic, same as it was not between the end of '23 and this year, and for the same reasons - you need symbolic reasoning to generate large chunks of coherent, correct code. I also don't see where "dramatic" would come from after that either, unless we get some new physics that lets us run models 10-20x the size in real time, economically. But even then, until we get true AGI, which we won't get in my remaining lifetime, those systems will be best used as a "bicycle for the mind" rather than "the mind", and in the vast majority of cases they will not be able to replace humans entirely, at least not in software engineering.


> 'proper abstraction'

I assume you're not talking about GPT-4o, because in my experience it's absolutely dogshit at abstracting code meaningfully. Good at finding design patterns, sure, but if your AI doesn't understand how to state machine, I'm not sure how I'm supposed to use it.
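
(By "how to state machine" I mean something as basic as the following - a minimal, hypothetical sketch with invented names, not code from any real project: represent the states and legal transitions explicitly instead of scattering if/else checks everywhere.)

    # Minimal, hypothetical sketch of an explicit state machine abstraction.
    from enum import Enum, auto

    class OrderState(Enum):
        CREATED = auto()
        PAID = auto()
        SHIPPED = auto()
        CANCELLED = auto()

    # Legal transitions in one place, instead of scattered conditionals.
    TRANSITIONS = {
        OrderState.CREATED: {OrderState.PAID, OrderState.CANCELLED},
        OrderState.PAID: {OrderState.SHIPPED, OrderState.CANCELLED},
        OrderState.SHIPPED: set(),
        OrderState.CANCELLED: set(),
    }

    def transition(current: OrderState, nxt: OrderState) -> OrderState:
        if nxt not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition: {current} -> {nxt}")
        return nxt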

It's great at writing tests and documentation though.


GPT-4o is at least one order of magnitude behind Claude 3.5 Sonnet in coding. I use the latter.


Claude is better most of the time on the simpler stuff, but o1 is better on some of the more difficult problems that Claude craps out on. Really $40/mo is not too much to pay for both.


That's the kind of thing I'd love to know more about.

You should join my Discord channel so we can chat more: https://discord.gg/S44tzqHqU4


That's also my gut feeling: about 30% of the code I write needs some kind of engineering skill, and I love reaching those problems. Until I get there, there is just a huge amount of boilerplate and patterns to repeat.



