Akin’s eighth law of spacecraft design says
In nature, the optimum is almost always in the middle somewhere. Distrust assertions that the optimum is at an extreme point.
When I first read this, I immediately thought of several examples where theory said the optimum was at an extreme, but experience said otherwise.
Linear programming (LP) says the opposite of Akin’s law: the optimum of a linear objective function subject to linear constraints is always at an extreme point. The constraints form a many-sided shape, something like a diamond, and the optimal point will always be at one of the corners.
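Here is a minimal sketch of that picture, assuming SciPy is available; the toy problem is my own, not from the post. The solver lands exactly on a corner of the feasible region.

```python
# Maximize x + 2y subject to x + y <= 4, 0 <= x <= 3, 0 <= y <= 3.
# (Toy problem for illustration; linprog minimizes, so negate the objective.)
from scipy.optimize import linprog

c = [-1, -2]               # minimize -(x + 2y), i.e. maximize x + 2y
A_ub = [[1, 1]]            # x + y <= 4
b_ub = [4]
bounds = [(0, 3), (0, 3)]  # 0 <= x <= 3, 0 <= y <= 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # [1. 3.] -- a corner, where x + y = 4 meets y = 3
```

Changing the objective coefficients rotates the level sets, but the optimum only jumps from corner to corner (or slides along an edge between corners); it never settles strictly inside the region.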
Nature is not linear, though it is often approximately linear over some useful range. One way to read Akin’s law is that even when something is approximately linear in the middle, there’s enough non-linearity at the extremes to pull the optimal point back from the edges. Put another way, when your model’s optimal value lands at an extreme point, there may be some unrealistic aspect of the model that pushed it out to the boundary.
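To make that concrete, here is a toy example of my own (not from the post): a payoff that is essentially linear over most of the range, plus a small nonlinear cost that only bites near the boundary. The purely linear model puts the optimum at the edge; the small nonlinearity pulls it back inside.

```python
# Toy illustration; the specific functions are my own choice.
import numpy as np

x = np.linspace(0, 0.999, 10_000)
linear = x                          # linear payoff: best at the edge
penalized = x - 0.01 / (1 - x)      # small cost that blows up near x = 1

print(x[np.argmax(linear)])         # ~0.999: pushed to the boundary
print(x[np.argmax(penalized)])      # ~0.9:   pulled back into the interior
```

The penalty term is negligible over most of the interval, so the linear approximation looks fine in the middle; it is only near the edge that the model and the world part ways.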
Related post: Data calls the model’s bluff
(shrug) Some things are well-modelled with linear objective/constraints, and some things aren’t. This is one place where the art of math joins with the science of math. Always good to keep in mind, but not exactly breaking news.
John, I think this observation might in part be due to selection bias: The problems that come to mind as “optimization problems” will tend to be ones where the solution is non-obvious, which means it should be at least plausible that the optimum is interior. Many problems where the optimum is on the boundary are just too boring to think about.
It looks like Akin’s 13th law violates his 8th:
‘Design is based on requirements. There’s no justification for designing something one bit “better” than the requirements dictate.’
(Asking where the requirements come from makes me think of duality…)
I can understand why Nature seems to favor solutions in the middle and not near extremes. That the extremes are the extremes suggests that beyond them hides some kind of failure mode. Potential solutions near those extremes could easily be nudged into failure by the unexpected perturbations that characterize so many natural environments.
I’ve implemented several precision instruments over the years, and it is common engineering practice to distrust values near the extremes of what the instrument can measure (even when accuracy, precision and resolution are all specified as being “over the entire range”).
When measuring any natural system, an extreme value could also mean your instrument (or analysis) is broken, or at least unable to properly describe the actual value.
I’ve been thinking about this for a bit (every time I see the title in my RSS reader), and I can think of a couple of cases that favour extremes. They aren’t mathematical, but there are plenty of reactions that favour the most extreme conditions you can throw at them, as you try to force a reaction to one side. Nature might like pH 7, aqueous conditions, but there are reasons people have invented superacids and ionic liquids, use refluxing sulphuric acid, etc.
Canageek: In that case you may be dealing with a problem that is adequately modeled so that the mathematical and physical extremes coincide. Nice thing about physical problems. Biology and business, areas I more often work in, are messier. :)
I was looking for an old post to show someone and saw this again, and thought of the perfect example: the creation of [NBu4][Au(CN)2] from K[Au(CN)2], or any similar salt exchange. The ideal conditions are when the product is totally insoluble, so it crashes out of solution, keeping the equilibrium all the way on the product side. In this case, you do the reaction in water, in which [NBu4][Au(CN)2] is (mostly) insoluble.
A half-joking response: From a philosophical/mathematical point of view, the boundary of a region has smaller dimension than the interior. So if you choose a continuous function on a closed region at random, the function will have its extremum in the middle almost always. Maybe nature is somehow reflecting this mathematical result. :-)
The guy who brought us linear programming also brought us game theory. In game theory, the minimax and maximin solutions are equal at the value of the game. The value of the game is the optimum. It is never at the extremes. Exceeding it gets us in trouble. Optimization as maximization is oversold.
Richard Feynman (from “Surely You’re Joking, Mr. Feynman”) on gear choice in mechanical design:
“Second, when you have a gear ratio, say 2 to 1, and you are wondering whether you should make it 10 to 5 or 24 to 12 or 48 to 24, here’s how to decide: You look in the Boston Gear Catalog, and select those gears that are in the middle of the list. The ones at the high end have so many teeth they’re hard to make, if they could make gears with even finer teeth, they’d have made the list go even higher. The gears at the low end of the list have so few teeth they break easy. So the best design uses gears from the middle of the list.”