
Ex-EE here

> The AI generated circuit was three times the cost and size of the design created by that expert engineer at TI. It is also missing many of the necessary connections.

Exactly what I expected.

Edit: to clarify, this is even below what you'd expect from a junior EE who'd had a heavy weekend on the vodka.




I read an article on evolutionary algorithm-based designs a long time ago -- they are effectively indecipherable by humans and rely on the imperfections of the very FPGA that they are synthesized on, but work great otherwise.

- https://www.damninteresting.com/on-the-origin-of-circuits/

- https://www.sciencedirect.com/science/article/abs/pii/S03784...
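
For a sense of how simple the underlying loop is, here's a bare-bones sketch of that kind of hardware-evolution experiment (illustrative only; evaluate_on_fpga is a hypothetical stand-in for loading the genome onto a real FPGA and scoring its analog behaviour, which is where the device's imperfections sneak into the result):

    import random

    GENOME_BITS = 1800      # e.g. one bit per configurable cell/connection
    POP_SIZE = 50
    MUTATION_RATE = 0.003

    def evaluate_on_fpga(genome):
        # Hypothetical stand-in: the real experiments programmed the genome
        # onto a physical FPGA and measured how well it discriminated two
        # input tones. Dummy score here so the sketch runs.
        return sum(genome) / len(genome)

    def mutate(genome):
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POP_SIZE)]

    for generation in range(100):
        ranked = sorted(population, key=evaluate_on_fpga, reverse=True)
        elite = ranked[:POP_SIZE // 5]   # keep the fittest 20% as-is
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(POP_SIZE - len(elite))]
        population = elite + children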


Why do people think inserting an LLM into the mix will work better than simply applying an evolutionary or reinforcement-learning model? Who cares if you can talk to it like a human?


Yeah, when the author described that initial query about delay-per-unit-length, I was thinking: "This doesn't tell us whether an LLM can apply the concepts, only whether relevant text was included in its training data."

It's a distinction I fear many people will have trouble keeping in mind, faced with the misleading eloquence of LLM output.
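
(For context, the figure being asked about is a standard signal-integrity rule of thumb that shows up all over the training data, which is exactly why it's a weak test of reasoning. Rough numbers under typical FR-4 assumptions, not taken from the article:)

    import math

    C = 3.0e8  # speed of light in vacuum, m/s

    def delay_ps_per_inch(eps_eff):
        # propagation delay per unit length: sqrt(eps_eff) / c
        return math.sqrt(eps_eff) / C * 0.0254 * 1e12

    print(delay_ps_per_inch(3.0))  # FR-4 microstrip, eps_eff ~ 3.0 -> ~147 ps/in
    print(delay_ps_per_inch(4.3))  # FR-4 stripline,  eps_eff ~ 4.3 -> ~176 ps/in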


I think the terms you're looking for are generalization and memorization. It has been shown that LLMs do generalize; the important question here is whether they generalized this or merely memorized it.


imo, it's the same reason Grace Hopper pushed for COBOL: so programs could be written in English-like statements instead of math notation.

What natural language processing gives you is just a much smarter (and, in many ways, dumber) parser that can attempt to infer intent and can be told how to recover from mistakes.

Personally I'm a skeptic, since I've seen some hilariously bad hallucinations in generated code (and unlike a human engineer who will say "idk, but I think this might work", the LLM says "yessir, this is the solution!"). If you have to double-check every output manually, it's not much better than learning the material yourself. However, at least with programming tasks, LLMs are fantastic at giving wrong answers with the right vocabulary - which makes it possible to verify and find a solution through authoritative sources and references, instead of blindly flailing at a problem or paying a human a lot of money to answer your query.

For example, I don't use LLMs to give me answers. I use them to help explore a design space, particularly by giving me the vocabulary to ask better questions. And that's the real value of a conversational model today.


I think you've nailed a subtlety — and a major doubt — I've been trying to articulate about LLM code helpers from day one: the difficulty in programming is reducing a natural-language problem to (essentially) a proof. I suspect LLMs are great at transferring style between two sentences, but I don't think that's the same as proof generation! I know work is being done in this area, but the results I've seen have been weird. Maybe transferring style won't work for math as easily as it does for spoken language.


It's like a generated image with an eye missing but for circuits. :D


AI proceeds to use a 2N3904 as a thyristor.

AI is happy because it worked for the first 10 ns of the cycle.


Every natural intelligence knows that you reach for a 2N3055 for heavy duty. ;)



