> If you collect 3 different data points changing each thing one at a time (original design, some number higher, some number lower) whilst keeping everything else constant (usually a good scientific approach) that's 320 possible combinations of changes!
There is an entire field of statistics (Design of Experiments) where one of the first lessons you learn on day one is how one-factor-at-a-time testing is one of the most inefficient ways you can test something. It’s usually only done out of ignorance of better methods, by those with little to no formal statistical training.
An experiment designed by someone who is well versed in modern experimental design methods would not take billions of runs to optimize—a sequential design that first screens out factors to those that matter (basic Pareto principle) followed by a response surface design or a GP model surrogate to optimize the response would likely be on the order of hundreds (possibly thousands) of runs. This is basic industrial experimentation—see “Design and Analysis of Experiments” by Douglas C. Montgomery for a nice introductory textbook.
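To make the run-count argument concrete, here is a minimal sketch of that two-stage approach (the objective function and factor names are hypothetical stand-ins for an expensive battery simulation, not anything from the thread): screen seven two-level factors with an 8-run fractional factorial, then spend the remaining budget only on the factors the screen flags as active.

```python
import random

# Hypothetical stand-in for an expensive simulation: of 7 candidate
# factors, only x[1] and x[4] actually matter.
def simulate(x):
    return 3.0 * x[1] - 2.0 * x[4] + 0.8 * x[1] * x[4] + random.gauss(0, 0.05)

# Standard L8 / 2^(7-4) fractional factorial: 7 two-level factors in
# 8 runs (a full two-level factorial would need 2**7 = 128).
L8 = [
    [-1, -1, -1, -1, -1, -1, -1],
    [-1, -1, -1, +1, +1, +1, +1],
    [-1, +1, +1, -1, -1, +1, +1],
    [-1, +1, +1, +1, +1, -1, -1],
    [+1, -1, +1, -1, +1, -1, +1],
    [+1, -1, +1, +1, -1, +1, -1],
    [+1, +1, -1, -1, +1, +1, -1],
    [+1, +1, -1, +1, -1, -1, +1],
]

random.seed(0)
y = [simulate(run) for run in L8]

# Main effect of factor j = mean(y at level +1) - mean(y at level -1).
effects = []
for j in range(7):
    hi = [yi for run, yi in zip(L8, y) if run[j] == +1]
    lo = [yi for run, yi in zip(L8, y) if run[j] == -1]
    effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))

# Keep the few factors with large effects (Pareto principle) ...
ranked = sorted(range(7), key=lambda j: abs(effects[j]), reverse=True)
active = ranked[:2]
print("active factors:", active)  # → [1, 4]

# ... then spend the follow-up budget only on those: a 5x5 grid here,
# so 8 + 25 = 33 runs total instead of 128 (or billions at finer levels).
levels = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = max(
    ((a, b) for a in levels for b in levels),
    key=lambda p: simulate([0, p[0], 0, 0, p[1], 0, 0]),
)
print("best settings for active factors:", best)
```

In a real study the follow-up stage would be a proper response surface design or a GP surrogate rather than a grid, but the budget arithmetic is the point: screening collapses the space before anything expensive happens.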
Yes, completely agree. This is one of the things that PyBaMM doesn't do for you out of the box. I could have extended the article in many ways to cover all the optimizations you could do, both with the physical battery and with the model. Thanks for sharing the textbook. My point, which I should have stated more clearly, was that for smaller design spaces one-factor-at-a-time might not be a bad approach, but with batteries the space is huge. I actually co-authored a paper on optimally reaching the Pareto front, using this problem as an example; it may be interesting reading for anyone else coming to this area. https://www.sciencedirect.com/science/article/abs/pii/S03062... Happy to share the PDF with anyone who wants to read the whole thing.
On a related topic, there is this example combining PyBaMM and PINTS for sensitivity analysis, which should definitely be done first before delving straight into a DOE: https://github.com/pints-team/electrochem_pde_model
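For anyone unfamiliar with what such a sensitivity screen looks like, here is a minimal pure-Python sketch of Morris elementary effects (a randomized one-at-a-time screening method; the toy model below is a hypothetical stand-in, not the one in the linked repo): you average the magnitude of local finite-difference slopes over many random base points, and inputs with large averages are the ones worth carrying into a DOE.

```python
import random

# Hypothetical stand-in for an expensive model: only the first two of
# its five inputs have any real influence on the output.
def model(x):
    return 10.0 * x[0] + 5.0 * x[1] ** 2 + 0.01 * x[2]

def elementary_effects(f, dim, n_trajectories=20, delta=0.1, seed=1):
    """Morris screening: average |f(x + delta*e_j) - f(x)| / delta over
    random base points in [0, 1]^dim; large values flag active inputs."""
    rng = random.Random(seed)
    mu_star = [0.0] * dim
    for _ in range(n_trajectories):
        x = [rng.uniform(0, 1 - delta) for _ in range(dim)]
        fx = f(x)
        for j in range(dim):
            xp = list(x)
            xp[j] += delta
            mu_star[j] += abs(f(xp) - fx) / delta
    return [m / n_trajectories for m in mu_star]

mu = elementary_effects(model, dim=5)
print(mu)  # inputs 0 and 1 dominate; 2, 3 and 4 are negligible
```

The cost is (dim + 1) model runs per trajectory, so this stays cheap even when the full model is slow, which is exactly why it makes sense before committing a DOE budget.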
Taguchi optimization methods for experimental design are worth looking at:
> "The experimental design proposed by Taguchi involves using orthogonal arrays to organize the parameters affecting the process and the levels at which they should be varied. Instead of having to test all possible combinations like the factorial design, the Taguchi method tests pairs of combinations. This allows for the collection of the necessary data to determine which factors most affect product quality with a minimum amount of experimentation, thus saving time and resources."
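The "tests pairs of combinations" property is easy to check by hand. A sketch with the smallest Taguchi array, L4: three two-level factors in 4 runs instead of the 2**3 = 8 a full factorial needs, yet every pair of columns still exercises all four level combinations.

```python
from itertools import combinations, product

# Taguchi L4 orthogonal array: 3 two-level factors in 4 runs.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

# For every pair of factors, all four level combinations
# (1,1), (1,2), (2,1), (2,2) appear somewhere in the 4 runs.
for i, j in combinations(range(3), 2):
    seen = {(run[i], run[j]) for run in L4}
    assert seen == set(product([1, 2], repeat=2))
print("every pairwise level combination covered in 4 runs, not 8")
```

The same balance property is what makes the main-effect estimates from larger arrays (L8, L9, L18, ...) unconfounded with each other.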
Completely unrelated concepts: unit testing is part of a regression-testing framework, while OFAT is an (almost always suboptimal) strategy for designed experiments.