
Supercomputers (the classical kind) typically don't want to run codes that try lots of combinations; they were designed for, excel at, and cost a lot because of the need to speed up single runs at a time. That's partly history and partly how "supercomputer" gets defined, but every time I proposed runs like this (proteins/drugs) to the supercomputer centers, they told me to bug off because my codes "only scaled to 64 processors" (that's 64 servers, mind you; this was before SMP was common).

We did what you described using idle cycles at Google (search for "Exacycle") and got great results doing large-scale parameter explorations, either randomly sampled or sampled based on where the previous sims suggested looking next. Nobody actually did materials simulations like this, though; we did proteins.
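
The loop itself is conceptually simple. Here's a minimal sketch in Python, not the actual Exacycle code; run_sim and perturb are hypothetical stand-ins for the expensive simulation (with its loss function) and the "look near the best results" resampling step:

    import random

    def run_sim(params):
        # Hypothetical stand-in for an expensive simulation;
        # returns a loss (lower is better).
        x, y = params
        return (x - 0.3) ** 2 + (y + 0.1) ** 2

    def random_params():
        # Round 1: sample the parameter space uniformly at random.
        return (random.uniform(-1, 1), random.uniform(-1, 1))

    def perturb(params, scale=0.1):
        # Later rounds: sample near a promising point.
        return tuple(p + random.gauss(0, scale) for p in params)

    pool = [random_params() for _ in range(1000)]
    scored = sorted((run_sim(p), p) for p in pool)

    for _ in range(5):
        # Resample around the 100 best results seen so far,
        # keeping the top 1000 overall.
        best = [p for _, p in scored[:100]]
        pool = [perturb(random.choice(best)) for _ in range(1000)]
        scored = sorted(scored + [(run_sim(p), p) for p in pool])[:1000]

    print("best loss:", scored[0][0], "at", scored[0][1])

The point is that every run_sim call is independent, which is exactly why idle cycles work for this and tightly-coupled supercomputer time doesn't.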

Realistically, almost nobody does this because it's just not cost-effective: the search space is too large, the loss functions aren't accurate enough, and it uses TONS of energy. More importantly, somebody else is going to find a way to generate 75% of the results with 25% of the energy, and that person will get published faster.



