I won't deny that in a context with perfect information, a future LLM will most likely produce flawless code. I too believe that is inevitable.
However, in real-world work situations, that 'perfect information' prerequisite will be a big hurdle, I think. A design can depend on any number of vague agreements and a lot of domain-specific knowledge, things a senior software architect has only learnt by being at the company for a long time. It will be very hard for an LLM to make all the correct decisions without that knowledge.
Sure, if you write down a summary of every meeting you've attended for the past 12 months and attach your entire company Confluence to the prompt, perhaps then the LLM can design the right architecture. But is that realistic?
More likely, I think, the human will write the initial design and specification documents with the aforementioned things in mind, and the LLM will do the rest of the coding.
Not because it would be technically impossible for the LLM to do the code design, but because it would be practically impossible to craft the prompt that produces the desired result from a blank sheet.
I agree that there’s a lot of ambiguity and tacit information that goes into building code. I wonder if that won’t change precisely because of the desire to get more value out of agentic AI coders.
> Sure, if you write down a summary of every meeting you've attended for the past 12 months and attach your entire company Confluence to the prompt, perhaps then the LLM can design the right architecture. But is that realistic?
I think it is definitely realistic. Zoom and Confluence already have AI integrations. It doesn’t seem long before these tools and others become more deeply MCPified, with their data and interfaces made available to the next generation of AI coders: “I’m going to implement function X with this specific approach based on your conversation with Bob last week.”
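To make that concrete, here is a minimal sketch of what such an integration could look like, using the FastMCP helper from the official MCP Python SDK. The tool name, the in-memory notes store, and the search logic are all hypothetical stand-ins; a real Zoom or Confluence connector would query their APIs instead.

```python
# Hypothetical MCP server exposing archived meeting summaries to an AI coder.
# Uses FastMCP from the official MCP Python SDK (pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("meeting-notes")

# Stand-in data; a real server would pull from Zoom transcripts or Confluence pages.
NOTES = [
    {"date": "2024-05-02", "attendees": ["Bob"],
     "summary": "Agreed billing sync stays synchronous, not event-driven."},
    {"date": "2024-05-09", "attendees": ["Alice"],
     "summary": "Decided auth tokens are issued and validated at the gateway."},
]

@mcp.tool()
def search_meeting_notes(query: str) -> list[dict]:
    """Return meeting summaries whose text mentions the query term."""
    q = query.lower()
    return [n for n in NOTES if q in n["summary"].lower()]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an agent can call it
```

An agent with a tool like this wired in could justify a design choice by citing the decision from the May 2nd meeting, instead of needing someone to paste that context into the prompt by hand.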
It strikes me that remote-first companies may be at an advantage here, as they’re already likely to have written artifacts of decisions and conversations, which can then provide more context to AI assistants.