
Series Hybrid vs. Parallel Hybrid

April 18, 2012 by Ron Averill

Hybrid refers to something that is made up of two or more diverse ingredients. The goal in combining them is to capture and merge the advantages of each ingredient, while overcoming any disadvantages. But ingredients can be combined in many ways, resulting in considerable variation in performance depending on how they are combined.

In optimization search algorithms, as with electric vehicles, there are two main categories of hybrids: series and parallel. To better understand the basic
differences between the series and parallel hybrid approaches, let’s consider a simple illustration.

Suppose a team of people needs to carry an object a long distance. In a series hybrid strategy, one person will carry the object for a while, and then someone else will take the load and carry it a bit further. This “tag team” approach continues until the required distance is covered. This series approach might work well if the object is lightweight and small. But if the object is heavy or awkwardly shaped, then it will be difficult for one person to carry it even a short distance, if he or she can move it at all.

In this situation, it is clearly more effective for two or more people to carry the object together in a parallel hybrid approach. By working together in a well-coordinated effort, the load can be shared in a way that allows each participant to contribute to the task. Each contribution, however small, reduces the load on other team members, allowing the group to carry the load faster and further with less fatigue. Drivers of horse-drawn wagons, dog sleds and Christmas sleighs discovered this truth a long time ago.

Series hybrid optimization
Turning our attention to optimization, a series hybrid algorithm is developed by starting with one search algorithm, and then switching to another one (using a different strategy than that of the first algorithm) to continue the search. There is no limit to the number of different search strategies that can be used in this sequential manner.

Typically, a series hybrid algorithm begins with a search method that is good at global exploration, such as a Genetic Algorithm, and ends with a local refinement strategy, such as a gradient-based algorithm. Various other search methods can be sandwiched between these two. On some problems, this type of series optimization algorithm has been shown to perform reasonably well compared to monolithic (single-strategy) algorithms, when an appropriate set of algorithms and tuning parameters has been chosen.
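As a concrete sketch of this two-stage idea, one can chain a global evolutionary search into a gradient-based local refiner. The specific pairing below (SciPy's differential evolution followed by L-BFGS-B, on the multimodal Rastrigin test function) is my own illustrative assumption, not a prescription from any particular tool:

```python
# Series hybrid sketch: global exploration first, local refinement second.
# The algorithm pairing and test function here are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def rastrigin(x):
    """A multimodal function with many local minima; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2

# Stage 1: global exploration with an evolutionary strategy.
# polish=False disables SciPy's built-in local cleanup, so the
# hand-off to stage 2 is explicit.
stage1 = differential_evolution(rastrigin, bounds, seed=0, polish=False)

# Stage 2: pass the best point found to a gradient-based local refiner.
stage2 = minimize(rastrigin, stage1.x, method="L-BFGS-B", bounds=bounds)

print("after stage 1:", stage1.fun, " after stage 2:", stage2.fun)
```

Note that the second stage can only refine whatever basin the first stage hands it, which is exactly the dependence on stage ordering and hand-off timing discussed below.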

How well a series hybrid optimization strategy performs depends on the specific algorithms and tuning parameters used at each stage of the search. Because each algorithm is working alone, the progress made at any time depends on how effective the selected method is for that problem and what it does with the information provided by previous search methods.

As I’ve mentioned in other posts, it is usually impossible to know which algorithms or values of tuning parameters will work well on a problem before it is solved. So, series hybrid algorithms have the same fatal flaw as most monolithic strategies, except the number of unknowns is now multiplied by the number of different strategies used.

Moreover, additional unknowns are introduced, such as the order of the strategies and when to stop one strategy in favor of another. Default values for these parameters may or may not work well for your current problem.

Parallel hybrid optimization
Parallel hybrid algorithms, like SHERPA (in HEEDS® MDO), overcome many of the shortcomings of series hybrid algorithms. In this strategy, multiple optimization methods actually work simultaneously to solve a problem in a collaborative fashion. Rather than contributing sequentially, these methods work together to search a design space and identify optimized solutions, like many hands helping to carry a heavy load.

As with any good team, a parallel hybrid algorithm requires good leadership, communication, coordination, and accountability. These attributes are built into the algorithm’s infrastructure from the start.

Instead of separately exploring and refining at different stages of a search, a parallel hybrid algorithm enables these two essential activities to take place concurrently and synergistically! This not only speeds up the search but also makes it more likely to find the global optimum.

In a series hybrid algorithm, the search history can be used to determine which individual algorithm(s) made the most meaningful contribution to the search. But this is not possible with a parallel hybrid algorithm, because each algorithm behaves very differently as part of a team than it would individually.

Nevertheless, there are ways to hold an individual search strategy accountable for its contributions within a parallel hybrid algorithm. Methods that do not contribute enough over time can be replaced by new ones, or their resources can be transferred to methods that are contributing at a higher level.
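As a toy illustration of these ideas (a sketch of my own invention, not SHERPA's actual method), imagine two strategies proposing candidates into one shared archive each cycle, with the evaluation budget shifting toward whichever strategy has credited more improvements:

```python
# Toy parallel hybrid sketch (hypothetical; not any real tool's algorithm):
# a global explorer and a local refiner share one best-so-far design, and
# their per-cycle evaluation budgets shift toward the bigger contributor.
import random
random.seed(0)

def f(x):                      # simple 1-D objective to minimize
    return (x - 3.0) ** 2

def explore(_best):            # global strategy: sample anywhere
    return random.uniform(-10, 10)

def refine(best):              # local strategy: perturb the shared best
    return best + random.gauss(0, 0.5)

strategies = {"explore": explore, "refine": refine}
budget = {"explore": 5, "refine": 5}   # evaluations per cycle
credit = {"explore": 0, "refine": 0}   # improvements credited to each
best_x, best_f = 0.0, f(0.0)

for cycle in range(20):
    for name, propose in strategies.items():
        for _ in range(budget[name]):
            x = propose(best_x)
            fx = f(x)
            if fx < best_f:            # shared discovery: both strategies see it
                best_x, best_f = x, fx
                credit[name] += 1
    # Accountability: move one evaluation per cycle toward the leader.
    leader = max(credit, key=credit.get)
    laggard = min(credit, key=credit.get)
    if leader != laggard and budget[laggard] > 1:
        budget[laggard] -= 1
        budget[leader] += 1

print(round(best_x, 3), budget)
```

Even in this crude form, the explorer's discoveries immediately improve the refiner's starting point, and vice versa, which is the synergy a sequential hand-off cannot provide.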

The characteristics of a well-designed parallel hybrid optimization algorithm include shared discovery, intellectual diversity, synergistic search, and greater robustness. Oh, and better designs, faster!



Collaborative Optimization

March 19, 2012 by Ron Averill

Many engineers still resist the use of optimization algorithms to help improve their designs. Perhaps they feel that their hard-earned intuition is just too important to the solution process. In many cases, they are right.

At the same time, most optimization algorithms still refuse to accept input from engineers to help guide their mathematical search. The assumption is that the human brain cannot possibly decipher complex relationships among multiple system responses that depend upon large numbers of connected variables. Unfortunately, this is true.

Is this a case of irreconcilable differences? Or is it simply an example of everyone wanting to be the teacher, and no one wanting to be the student? I’m reminded of the Latin proverb:

“By learning you will teach; by teaching you will learn.”

Surely engineering intuition can benefit from the results of mathematical exploration, and vice versa. It seems almost obvious.

So why do engineers and optimization algorithms prefer to work solo? I don’t believe they prefer this. I think it is more a matter of not knowing how to collaborate, or not having the tools to facilitate this interaction.

Fortunately, modern optimization software tools like HEEDS now have features that encourage engineers to learn from the intermediate results of an optimization study and to share intuition-based insights with the optimization algorithm during a search. This collaborative optimization process leverages our two most powerful design tools – human experience and computers.

Consider the following example:

  • An engineer uses intuition and experience to define the goals of an optimization problem and a baseline (starting) design.
  • An optimization search algorithm then begins to explore the design space to uncover mathematical relationships that can lead to an optimized design, all the while sharing its progress and discoveries with the engineer.
  • While monitoring, validating and interpreting these intermediate search results, the engineer starts to learn what makes some designs better than others. This new understanding causes the engineer’s intuition to practically blurt out, “If design B is better than design A, then design C should be even better!” Of course, the optimization algorithm might eventually discover design C on its own, but it would surely take a lot longer to do so.
  • The engineer shares this insight with the optimization algorithm, which happily accepts the input and puts it to use immediately. If the engineer was correct, then the algorithm now has new information that will accelerate its search. If the engineer’s intuition did not lead to a better design, then the algorithm has only spent one design evaluation to discover this, and the new data may still have some valuable nuggets that can be exploited later in the search.
  • The circular process of exploring, monitoring, interpreting and sharing continues throughout the search, leading to better designs in much less time than was previously possible.
  • The enhanced communication between the engineer and the optimization algorithm builds a strong interdependent relationship between the two, leveraging the strength of each.
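The loop described above can be sketched in a few lines. Everything here is a hypothetical toy search (not HEEDS' actual interface): a simple elitist evolutionary search that accepts an engineer-suggested design mid-run and keeps it only if it competes:

```python
# Collaborative optimization sketch (hypothetical toy, not a real tool's API):
# an engineer's suggested design enters the candidate pool mid-search and
# survives only if selection favors it.
import random
random.seed(1)

def cost(x):                   # objective to minimize
    return (x[0] - 2) ** 2 + (x[1] + 1) ** 2

def mutate(x):
    return [v + random.gauss(0, 0.3) for v in x]

population = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(8)]

def step(pop, suggestion=None):
    """One generation; an optional engineer suggestion joins as a candidate."""
    candidates = pop + [mutate(p) for p in pop]
    if suggestion is not None:         # intuition-based insight enters the pool
        candidates.append(suggestion)
    return sorted(candidates, key=cost)[:len(pop)]  # elitist selection

for gen in range(30):
    # After monitoring intermediate results, the engineer interjects once.
    hint = [2.1, -0.9] if gen == 10 else None
    population = step(population, hint)

print(population[0], cost(population[0]))
```

Because selection is elitist, a good suggestion immediately raises the floor of the search, while a poor one costs only a single evaluation, mirroring the low-risk, high-reward trade described in the bullets above.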
Collaborative optimization tears down one of the most common objections that experienced engineers have to using optimization methods. It not only makes full use of an engineer’s intuition, but also improves that intuition through experience gained from mathematical exploration of the design space. This is accelerated learning at its best.

Further, this is not one of those nice ideas that looks good on paper but doesn’t work well in practice. This technique has already been used very successfully on many challenging design problems, including composite aircraft, crashworthy cars and insulated vaccine carriers.

There is now overwhelming evidence that a more intimate coupling of intuition with a hybrid, adaptive optimization algorithm can solve many challenging problems previously thought to be intractable.

Of course, in order to find the best solutions, many optimization algorithms tend to explore a design space broadly, even spending some time in those regions of the space that don’t yield any good designs. So the process helps us to understand not only what makes a good design, but also why some designs perform poorly. This reminds me of another important proverb:

“Wise men learn by other men's mistakes, fools by their own.”

Triathlon

March 5, 2012 by Ron Averill

I am currently training to compete in my first sprint triathlon race. Well, compete may be an exaggeration, and there won’t be much sprinting. But I do hope to cross the finish line before the sun sets.

If you are unfamiliar with the sport, a sprint triathlon is a race with three components. Participants swim about one-half mile in a lake, then ride a bike about 12 miles along a marked road course, and finally run 3.1 miles to reach the finish line.

To an athlete, this race sounds like a fun challenge. To an engineer, it is a fascinating multi-objective optimization problem.

Clearly, the primary objective is to cross the finish line in the shortest time possible. But it’s still important to remember that the total race time is the sum of the times necessary to complete three very different parts of the race – the swim, the bike and the run.

In optimization, this is often referred to as a summed objective problem. It is a good way to handle optimization problems containing multiple objectives that do not directly compete with one another. In other words, improving the value of one objective does not inevitably worsen another one. For example, swimming faster does not necessarily make you run slower, so these two objectives can naturally be summed together when seeking an optimal race strategy.

However, non-competing objectives might still be strongly coupled. In a triathlon, if you spend too much energy swimming faster, then you won’t have enough energy remaining to bike or run at your best, resulting in a slower overall race time. Based on the athlete's level of skill and fitness, there is an ideal swimming pace that preserves just enough energy to perform optimally in the bike and run portions of the race. Similar arguments can be made about the effect of the bicycling pace on the run performance.

So there is still a trade-off among the three objectives, but not enough to warrant treating the three objectives separately, as in a Pareto optimization. When the ultimate objective is pretty clear and the trade-offs are due more to interactions than competition, a summed objective approach is usually recommended.
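A toy model makes the summed-objective idea concrete. All formulas and numbers below are illustrative assumptions: harder swim and bike efforts shorten those legs but drain a shared energy budget, which slows the run, so the single summed objective still captures the interactions:

```python
# Toy summed-objective race model (all coefficients are made-up assumptions):
# three leg times coupled through one energy budget, minimized as a single sum.
from scipy.optimize import minimize

def total_time(paces):
    swim_effort, bike_effort = paces          # effort levels in (0, 1)
    swim = 30.0 / swim_effort                 # minutes; more effort -> faster leg
    bike = 45.0 / bike_effort
    # Energy spent swimming and biking leaves less for the run.
    energy_left = max(1e-6, 1.0 - 0.4 * swim_effort - 0.4 * bike_effort)
    run = 25.0 / energy_left ** 0.25          # tired legs slow the run
    return swim + bike + run                  # one summed objective

res = minimize(total_time, x0=[0.5, 0.5], bounds=[(0.1, 0.99)] * 2)
print("efforts:", res.x, "total minutes:", res.fun)
```

The optimizer settles on interior effort levels rather than maximum ones, which is the triathlete's lesson: the best overall time comes from pacing the parts for the sake of the whole, not from sprinting each leg.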

Interactions also play a role in how we define the scope of our optimization problem. Due to lack of time or interest, many triathletes focus their training on just one or two of the triathlon events, paying only minimal attention to the other one(s). This short-sighted approach usually leads to poor overall performance in the race. As already noted, improving performance in one or two events does not guarantee any improvement at all in the overall race time.

To find a truly optimal strategy, you must optimize the interactions among the components. Any improvement to the parts must be considered in light of its contribution to the whole.

The same is true for optimization problems within engineering, science and business. We often focus our attention on improving a part within a system, perhaps the part that is failing or is most expensive. In doing so, we ignore the interactions among the various parts in the system, and we severely limit the scope and the potential benefits of the optimization search process.

Consider the common goals of reducing mass or cost. It may be that by adding mass or cost to a given component (blasphemy!), we can then reduce the mass or cost in other components to achieve an overall system level improvement. Isn’t this the ultimate goal, after all? But when we focus on individual parts and ignore their interactions, these opportunities remain hidden.

So yes, I do plan to swim slowly during my triathlon race in order to improve my overall time. That’s my story, and I’m sticking to it.