In multi-disciplinary design optimization (MDO), a similar question comes up: “how many design evaluations are needed to converge on the optimal solution?” Unfortunately, as with money, there is no universal answer. The number of evaluations required depends on the problem you are trying to solve, your approach to solving it, and the starting point for the solution. Let’s unpack this a bit.

When you define an optimization problem in terms of its objectives, constraints and variables, you are also defining a mathematical design space. Visually, this design space resembles a mountainous landscape. But, you don’t actually know what this landscape looks like, which is why you need an optimization algorithm to search it. And it’s difficult to estimate how many steps will be needed to find the lowest valley in a mountain range you’ve never seen.

Further, each optimization algorithm uses its own unique strategy to search the design space. Its effectiveness and efficiency in performing the search depend, in large part, on how well the selected strategy works on the type of space you’ve defined. This is difficult to know ahead of time, because you may not know even the basic characteristics of your design space, such as whether it is smooth, multi-modal, or discontinuous.

Finally, for a given problem statement and search algorithm, the number of design iterations needed to find the optimal solution still depends on the solution starting point, often called the baseline design. It is usually better to start with a baseline design that is closer to the optimal design, but not always. Again, you don’t know where the optimal design is before you start.

Considering all of these unknowns, it is nearly impossible to accurately predict how many evaluations it will take to find the optimal solution to a design problem. If you are like many people, this uncertainty might make you feel uncomfortable at first, or reduce your confidence in finding the solution you seek. In this case, you have two options. Either find a different process, or figure out how to manage the uncertainty.

In an effort to find a process that makes you feel more in control, you may settle for an approach that leads to inferior solutions or requires more work in the long run. For example, you might choose to do a design of experiments (DOE) with a fixed number of samples instead of actually searching the design space. Or, you might decide to use a search algorithm that you understand well, but that may not be appropriate for the current problem.

While these approaches might make you feel better initially, you have just traded one set of uncertainties for another, and perhaps significantly reduced the chances of finding a solution that achieves your true goal.

It is much better, I think, to contend with the uncertainties associated with selecting a number of evaluations to use during optimization. This is not as difficult as it seems.

Typically, the more evaluations you allow an algorithm to use during its search, the better its chance of finding the optimal solution. But, running a very large number of evaluations is seldom an option. Often there isn’t enough time, or computing resources, to perform the number of design evaluations you would need to find it.
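The relationship between evaluation budget and solution quality is easy to see even with the simplest possible search. The sketch below uses a toy quadratic objective and a pure random search; both are illustrative stand-ins I've made up for this example, not part of any particular MDO tool. With a fixed random seed, a larger budget can only match or improve the best design found.

```python
import random

def objective(x):
    """A toy 2-D quadratic 'landscape' standing in for an expensive design evaluation."""
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def random_search(budget, seed=0):
    """Pure random search: return the best objective value found within a fixed budget."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-10, 10), rng.uniform(-10, 10)]
        best = min(best, objective(x))
    return best

# With the same seed, the 1000-evaluation run revisits the first 10 samples
# and then keeps searching, so its answer is never worse.
small = random_search(10)
large = random_search(1000)
assert large <= small
```

Real algorithms search far more intelligently than this, but the same trade-off applies: the budget caps how much of the landscape you get to see.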

Additionally, there are rules of thumb that help us estimate the number of evaluations needed to find a “good” solution, if not the true optimum. These estimates generally depend on the number of design variables and can be refined if you do know some of the characteristics of your design space. The rules of thumb differ from algorithm to algorithm and are more reliable for those algorithms that work well on a broad range of problems.
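One such rule of thumb can be sketched in a few lines. The figure of roughly ten samples per design variable used below is a commonly quoted starting point for space-filling studies, not a convergence guarantee, and the function name is my own:

```python
def evaluation_budget(n_vars, samples_per_var=10):
    """Rule-of-thumb evaluation budget: a fixed number of samples per design variable.

    The default of 10 samples per variable is a frequently cited heuristic for an
    initial space-filling study; refine it if you know your space is especially
    smooth (fewer samples) or multi-modal (more samples).
    """
    return samples_per_var * n_vars

# A 12-variable problem under this heuristic:
budget = evaluation_budget(12)   # 120 evaluations as a starting estimate
```

Treat the result as a planning number, not a stopping criterion: it tells you whether the study is even feasible with the time and computing resources you have.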

Modern optimization algorithms and software also allow you to restart a search from practically any point in a study. So, if you still have some time left and you want to improve the current solution, the search process can easily be restarted from the best design found so far.
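A restart is straightforward when the algorithm accepts a starting point: feed the best design from the first pass back in as the new baseline. The hill-climbing routine below is a minimal sketch of my own, not any tool’s API; the key property is that a warm-started search begins at the incumbent and can only keep or improve it.

```python
import random

def objective(x):
    """Toy 1-D objective standing in for an expensive design evaluation."""
    return (x - 2.0) ** 2

def hill_climb(start, budget, step=0.5, seed=0):
    """Simple stochastic hill climb; returns (best_x, best_f) after `budget` evaluations."""
    rng = random.Random(seed)
    best_x, best_f = start, objective(start)
    for _ in range(budget):
        cand = best_x + rng.uniform(-step, step)
        f = objective(cand)
        if f < best_f:
            best_x, best_f = cand, f
    return best_x, best_f

# First pass with a limited budget...
x1, f1 = hill_climb(start=10.0, budget=50)
# ...then, with time left over, restart from where the first search stopped.
x2, f2 = hill_climb(start=x1, budget=50, seed=1)
assert f2 <= f1   # the restarted search starts at the incumbent, so it cannot do worse
```

This is why the uncertainty about the “right” number of evaluations is manageable: pick a defensible budget, run, and restart if the result isn’t good enough yet.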
