
Basic Principles of a New Optimization Paradigm

March 15, 2011 by Ron Averill

Change is often viewed as the result of a scientific discovery or the development of a breakthrough technology. But there's usually a lot more to the story. To quote Paul Saffo: "Technology doesn't drive change, it enables change."

When the chainsaw was first introduced, I wonder how many lumberjacks tried dragging it back and forth against a tree, expecting it to work the same way as a hand saw. Figuring out the best use of a new technology is just as important, and sometimes as difficult, as developing the technology in the first place.

Too often, the real value of a new technology becomes enslaved by old notions about how things work or what is possible. To realize the full advantages of a new technology, we need to look at things differently and accept new possibilities.

Consider the way design optimization was performed about twenty years ago. Computing power was limited, computer-aided engineering (CAE) models were not very accurate or robust, and optimization algorithms were restricted to simple problems. For most real-world commercial applications, it was virtually impossible to search a broad or complicated design space. So, given those circumstances, a reasonable approach to optimization was established:

  • First, a "good" baseline design was developed using lots of manual iterations. Optimization was considered a finishing tool, not an exploration tool.
  • Second, a design of experiments (DOE) study was performed to determine which parameters had the greatest influence on the baseline design, a process referred to as variable screening. These sensitivity derivatives are accurate only in a small neighborhood around the baseline design, so they remain valid only when the subsequent optimization study is confined to a very narrow range.
  • Finally, using only a few of the most important variables, identified during the screening study, a local optimization study was performed over a narrow range of the parameter space. Often, this study used a simple response surface or a gradient-based search algorithm because the number of variables and the variable ranges were very small.
This process usually resulted in only an incremental improvement in the design, which was acceptable considering the available technology.
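The three-step workflow above can be sketched in a few lines of code. This is a minimal illustration under assumed details: the objective function, the finite-difference screening step, and the narrow random local search are all hypothetical stand-ins, not any particular commercial tool or algorithm from that era.

```python
import random

def objective(x):
    # Hypothetical design objective: lower is better. The third variable
    # deliberately has little influence, so screening should discard it.
    return (x[0] - 3.0) ** 2 + 10.0 * (x[1] - 1.0) ** 2 + 0.01 * x[2] ** 2

def screen_variables(f, baseline, step=1e-3):
    """Step 2: finite-difference sensitivity derivatives at the baseline."""
    f0 = f(baseline)
    sens = []
    for i in range(len(baseline)):
        x = list(baseline)
        x[i] += step
        sens.append(abs((f(x) - f0) / step))
    return sens

def local_search(f, baseline, active, half_range=0.5, iters=200):
    """Step 3: crude random local search over only the screened-in
    variables, restricted to a narrow range around the baseline."""
    rng = random.Random(0)
    best, best_f = list(baseline), f(baseline)
    for _ in range(iters):
        cand = list(best)
        for i in active:
            # perturb, then clamp to the narrow baseline-centered range
            cand[i] = min(max(best[i] + rng.uniform(-half_range, half_range),
                              baseline[i] - half_range),
                          baseline[i] + half_range)
        if f(cand) < best_f:
            best, best_f = cand, f(cand)
    return best, best_f

baseline = [2.5, 0.8, 5.0]   # step 1: a "good" design from manual iteration
sens = screen_variables(objective, baseline)
active = sorted(range(len(sens)), key=lambda i: -sens[i])[:2]  # keep top 2
design, value = local_search(objective, baseline, active)
print(active, round(value, 3))
```

Note how the final result can only ever be an incremental improvement: the least-sensitive variable is frozen at its baseline value, and the others never leave a narrow band around the starting design, exactly the limitation described above.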

Now, fast-forward to the present. Computing power is abundant and readily scalable. CAE models are accurate and reliable most of the time. And optimization algorithms can effectively search large and complex design spaces. Yet many engineers and scientists still perform design optimization the way it was done twenty years ago, when severe limitations on computing, modeling and search algorithms ruled the day.

In essence, we have substituted modern computers, models and algorithms into our old-school workflow. While we have occasionally seen some improvements, the results have mostly been disappointing.

Emboldened by the power of new technology, we define design spaces that are broader and contain more variables. But in doing so, we often violate the assumptions that made our old-school workflow valid, and we introduce inefficiencies into the process.

Complicating the situation further, we have introduced a long and growing list of optimization algorithms to address a host of different problem types. Even optimization experts often struggle to identify the best method for a given problem.

It's no surprise that many engineers are convinced optimization doesn't work. They are dragging that chainsaw back and forth against the tree, expecting it to work better than a hand saw!

I'm not saying that we should avoid larger design spaces with more variables. This is an important part of finding better, and even innovative, solutions. And advanced optimization strategies are certainly important for solving today's challenging problems. But we won't get consistently satisfactory results until we rethink how we use modern optimization technology.

Recently, a new paradigm has emerged for design optimization, one that is enabled by game-changing discoveries in optimization search technology and that leverages ongoing advances in computing power and virtual prototyping. This paradigm is free of the constraints imposed by previous technology and is based on a set of principles that allow a more natural flow of thought and effort:

  1. Start with a good concept, not necessarily a good design. (Let the optimizer do the work of searching for good designs.)
  2. Optimize early and often. (Not just at the end of the design cycle or after all other means have been exhausted.)
  3. Define the design problem you need to solve. (Not the one that can be solved by a certain optimization strategy.)
  4. Optimize the system interactions. (Not the components.)
  5. Let the optimization algorithm figure out how to search the design space. (There's often no way to guess ahead of time which search method and tuning parameters will work best.)
  6. Don't perform optimization using models of your models. (Response surface or surrogate models often increase effort and error.)
  7. Be an engaged participant in the optimization search. (Leverage your knowledge and intuition during a collaborative search process.)
  8. Care about the sensitivities of your final design. (Not those of your initial guess, which often have no bearing on the final design.)
A design optimization process based on these principles is simpler and consistently yields better solutions in less time. Implementing this process requires solid engineering skills, but you don't need to be an expert in optimization theory.

Design optimization software built around these principles is also simpler to use, because the workflow for defining, solving and post-processing an optimization study is cleaner, with fewer steps and tricky decisions to make.

Over the next several weeks, I will provide more detail about each of the above principles. In the meantime, be careful with that chainsaw!