Consider the consequences of maximizing iteration throughput for a typical manual design process. Let’s assume a simple, but familiar, scenario in which each iteration involves the following steps:

- Create a CAD model of the geometry,
- Build a math model to predict performance,
- Execute the math model, and
- Interpret its results.

Our plan is straightforward: we break down each task and maximize its efficiency. We assign the right people and the right tools to every activity and maximize the productive use of all resources by eliminating downtime. With this plan in mind, we optimize each of the steps in our process:

- First, we refine the CAD modeling process. We organize teams of experienced design engineers to rapidly generate models of the system geometry for each design variation. As soon as one model is complete, the next iteration begins.
- Each set of CAD models is sent to our computer-aided engineering (CAE) team, which builds math models (virtual prototypes) for each design variation. When possible, we automate the generation of meshed analysis models and parcel out model-building tasks to specialists to further increase efficiency.
- We configure simulation hardware and software to match the complexity of our typical math models and to achieve the desired solution time. While the math model of one design is running, the next ones are being built.
- Periodically, the leadership team huddles to evaluate the results and determine how well the performance targets are being met. Then, new design directions are chosen, based largely on intuition, and the churning continues.

But even though we may have increased the efficiency of each step in the process, and maximized the productivity of each team member, the result is just a sizable gain in iteration throughput, not better designs.

Why? Because the individual or local components of the iteration process have been optimized, but the overall global process has not been improved. In fact, too often, the resulting global process does not deliver better designs at all.

When faced with a challenging problem, the tendency is to break it up into manageable parts that we can solve separately. This often leads to a locally optimized solution or process that ignores the intricate and essential interactions among the parts. A better strategy is to focus on the interactions, and embrace the complexity of the situation, to uncover even greater advantages.
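The gap between local and global optimization can be made concrete with a toy numerical sketch. All of the functions and numbers below are hypothetical, chosen purely for illustration: two teams each drive their own private cost to zero, yet the system-level objective, which depends on how their choices interact, is left far from its optimum.

```python
# Toy illustration (hypothetical objectives): each team optimizes its own
# local cost, but system performance depends on the interaction of a and b.

def local_cost_a(a):
    return (a - 1.0) ** 2      # team A's private objective: minimized at a = 1

def local_cost_b(b):
    return (b - 1.0) ** 2      # team B's private objective: minimized at b = 1

def global_cost(a, b):
    return (a + b - 1.0) ** 2  # system objective: depends on a and b jointly

# Locally optimal choices: each team drives its own cost to zero.
a_local, b_local = 1.0, 1.0

# A jointly chosen design that respects the interaction constraint a + b = 1.
a_joint, b_joint = 0.5, 0.5

print(global_cost(a_local, b_local))   # 1.0 -- two local optima, poor system
print(global_cost(a_joint, b_joint))   # 0.0 -- globally optimized design
```

Both teams hit their individual targets in the first case, yet the design they produce together is worse than one coordinated around the interaction.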

In our present example, the separate steps are locally optimized based on a false definition of productivity. Instead of waiting for results of the current iteration to shape the direction of future designs, intuition-assisted ad hoc decisions are often made for the sake of keeping team members busy. The independent steps remain out of sync, and the process never gains the traction it needs to propel the design toward its target.

Of even greater concern is our inability to interpret a growing amount of math data as the number of iterations increases. It should be clear that quickly performing many random design iterations is not a good idea, and this is certainly not the intention. Yet we use up most of our intuition during the first few iterations, so the later stages of our process often rely more heavily on trial-and-error than we would like to admit.

So, more iterations won’t automatically lead to better designs. In order to benefit from these increased efforts, we must make better use of the knowledge created by previous iterations to intelligently navigate our search for an optimized solution.
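One way to picture this is a minimal search sketch. The objective function, seed, starting point, and iteration budget here are hypothetical stand-ins for a real design process: with the same number of iterations, a search that reuses its previous best result to choose the next candidate tends to close in on the target, while independent random iterations accumulate no knowledge from run to run.

```python
import random

def performance(x):
    # Hypothetical design metric to minimize (stand-in for a simulation run);
    # the best achievable value is 1.0, at x = 3.2.
    return (x - 3.2) ** 2 + 1.0

random.seed(42)
budget = 30  # number of simulation runs available

# Strategy 1: independent random iterations -- each run ignores earlier results.
random_best = min(performance(random.uniform(0.0, 10.0)) for _ in range(budget))

# Strategy 2: same budget, but each iteration perturbs the best design so far
# and shrinks its search neighborhood on failure -- prior results steer the search.
x_best = 5.0
f_best = performance(x_best)
step = 2.5
for _ in range(budget):
    candidate = x_best + random.uniform(-step, step)
    f_cand = performance(candidate)
    if f_cand < f_best:
        x_best, f_best = candidate, f_cand
    else:
        step *= 0.9  # no improvement: narrow the search around the current best

print(random_best, f_best)
```

The second strategy spends the same budget, but each iteration is informed by everything learned before it; that, not raw throughput, is what turns iterations into progress.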

The point is not that higher iteration throughput is bad, but that maximized manual churning should not be our primary goal. Instead of more designs in less time, we should be aiming for better designs, faster. There is a profound difference between these two goals.
