
Design Exploration User's Guide

ANSYS, Inc. Release 15.0


Southpointe
275 Technology Drive
Canonsburg, PA 15317
ansysinfo@ansys.com
http://www.ansys.com
(T) 724-746-3304
(F) 724-514-9494

November 2013

ANSYS, Inc. is certified to ISO 9001:2008.
Copyright and Trademark Information

© 2013 SAS IP, Inc. All rights reserved. Unauthorized use, distribution or duplication is prohibited.

ANSYS, ANSYS Workbench, Ansoft, AUTODYN, EKM, Engineering Knowledge Manager, CFX, FLUENT, HFSS and any
and all ANSYS, Inc. brand, product, service and feature names, logos and slogans are registered trademarks or
trademarks of ANSYS, Inc. or its subsidiaries in the United States or other countries. ICEM CFD is a trademark used
by ANSYS, Inc. under license. CFX is a trademark of Sony Corporation in Japan. All other brand, product, service
and feature names or trademarks are the property of their respective owners.

Disclaimer Notice

THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFID-
ENTIAL AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products
and documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement
that contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting
laws, warranties, disclaimers, limitations of liability, and remedies, and other provisions. The software products
and documentation may be used, disclosed, transferred, or copied only in accordance with the terms and conditions
of that software license agreement.

ANSYS, Inc. is certified to ISO 9001:2008.

U.S. Government Rights

For U.S. Government users, except as specifically granted by the ANSYS, Inc. software license agreement, the use,
duplication, or disclosure by the United States Government is subject to restrictions stated in the ANSYS, Inc.
software license agreement and FAR 12.212 (for non-DOD licenses).

Third-Party Software

See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software
and third-party software. If you are unable to access the Legal Notice, please contact ANSYS, Inc.

Published in the U.S.A.


Table of Contents
ANSYS DesignXplorer Overview ................................................................................................................ 1
Introduction to ANSYS DesignXplorer ..................................................................................................... 1
Available Tools .................................................................................................................................. 1
Design Exploration: What to Look for in a Typical Session ................................................................... 3
Initial Simulation: Analysis of the Product Under all Operating Conditions .......................................... 3
Identify Design Candidates ............................................................................................................... 3
Assess the Robustness of the Design Candidates ............................................................................... 4
Determine the Number of Parameters ............................................................................................... 5
Increase the Response Surface Accuracy ........................................................................................... 5
Limitations ............................................................................................................................................. 5
ANSYS DesignXplorer Licensing Requirements ........................................................................................ 5
The User Interface .................................................................................................................................. 6
Parameters ........................................................................................................................................... 10
Design Points ....................................................................................................................................... 12
Response Points ................................................................................................................................... 13
Workflow .............................................................................................................................................. 13
Adding Design Exploration Templates to the Schematic .................................................................. 14
Duplicating Existing DesignXplorer Systems .................................................................................... 14
Running Design Exploration Analyses ............................................................................................. 15
Monitoring Design Exploration Analyses ......................................................................................... 15
Reporting on the Design Exploration Analysis ................................................................................. 16
Using DesignXplorer Charts .................................................................................................................. 16
Exporting and Importing Data .............................................................................................................. 18
Design Exploration Options .................................................................................................................. 19
DesignXplorer Systems and Components ................................................................................................ 27
What is Design Exploration? .................................................................................................................. 27
DesignXplorer Systems ......................................................................................................................... 27
Parameters Correlation System ....................................................................................................... 27
Response Surface System ............................................................................................................... 28
Goal Driven Optimization Systems .................................................................................................. 28
Six Sigma Analysis System ............................................................................................................... 28
DesignXplorer Components .................................................................................................................. 29
Design of Experiments Component Reference ................................................................................. 29
Parameters Parallel Chart .......................................................................................................... 30
Design Points vs Parameter Chart .............................................................................................. 31
Parameters Correlation Component Reference ................................................................................ 31
Correlation Scatter Chart ........................................................................................................... 33
Correlation Matrix Chart ........................................................................................................... 34
Determination Matrix Chart ...................................................................................................... 34
Sensitivities Chart ..................................................................................................................... 34
Determination Histogram Chart ................................................................................................ 35
Response Surface Component Reference ........................................................................................ 35
Response Chart ........................................................................................................................ 38
Local Sensitivity Charts ............................................................................................................. 39
Spider Chart ............................................................................................................................. 39
Optimization Component Reference ............................................................................................... 40
Convergence Criteria Chart ....................................................................................................... 43
History Chart ............................................................................................................................ 43
Candidate Points Results ........................................................................................................... 44
Tradeoff Chart .......................................................................................................................... 44

Release 15.0 - SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates. iii

Samples Chart .......................................................... 45
Sensitivities Chart ..................................................................................................................... 45
Six Sigma Analysis Component Reference ....................................................................................... 46
Design of Experiments (SSA) ..................................................................................................... 46
Six Sigma Analysis .................................................................................................................... 48
Sensitivities Chart (SSA) ............................................................................................................ 49
Using Parameters Correlation .................................................................................................................. 51
Sample Generation ............................................................................................................................... 52
Running a Parameters Correlation ......................................................................................................... 53
Viewing the Quadratic Correlation Information ..................................................................................... 55
Determining Significance ...................................................................................................................... 55
Viewing Significance and Correlation Values .......................................................................................... 55
Parameters Correlation Charts ............................................................................................................... 55
Using the Correlation Matrix Chart .................................................................................................. 56
Using the Correlation Scatter Chart ................................................................................................. 57
Using the Determination Matrix Chart ............................................................................................. 59
Using the Determination Histogram Chart ...................................................................................... 60
Using the Sensitivities Chart ............................................................................................................ 61
Using Design of Experiments ................................................................................................................... 63
Setting Up the Design of Experiments ................................................................................................... 63
Design of Experiments Types ................................................................................................................ 64
Central Composite Design (CCD) ..................................................................................................... 64
Optimal Space-Filling Design (OSF) ................................................................................................. 65
Box-Behnken Design ...................................................................................................................... 66
Custom .......................................................................................................................................... 67
Custom + Sampling ........................................................................................................................ 67
Sparse Grid Initialization ................................................................................................................. 68
Latin Hypercube Sampling Design (LHS) ......................................................................................... 68
Number of Input Parameters for DOE Types ........................................................................................... 69
Comparison of LHS and OSF DOE Types ................................................................................................. 70
Using a Central Composite Design DOE ................................................................................................. 71
Upper and Lower Locations of DOE Points ............................................................................................. 73
DOE Matrix Generation ......................................................................................................................... 74
Importing and Copying Design Points ................................................................................................... 74
Using Response Surfaces .......................................................................................................................... 77
Meta-Model Types ................................................................................................................................ 77
Standard Response Surface - Full 2nd-Order Polynomial .................................................................. 77
Kriging ........................................................................................................................................... 78
Using Kriging Auto-Refinement ................................................................................................ 79
Kriging Auto-Refinement Properties .......................................................................................... 81
Kriging Convergence Curves Chart ............................................................................................ 82
Non-Parametric Regression ............................................................................................................. 83
Neural Network .............................................................................................................................. 83
Sparse Grid ..................................................................................................................................... 84
Using Sparse Grid Refinement ................................................................................................... 86
Sparse Grid Auto-Refinement Properties ................................................................................... 87
Sparse Grid Convergence Curves Chart ..................................................................................... 88
Meta-Model Refinement ....................................................................................................................... 89
Working with Meta-Models ............................................................................................................. 89
Changing the Meta-Model .............................................................................................................. 90
Performing a Manual Refinement .................................................................................................... 92
Goodness of Fit ..................................................................................................................................... 92

Predicted versus Observed Chart .................................................... 95
Advanced Goodness of Fit Report ................................................................................................... 96
Using Verification Points ................................................................................................................. 96
Min-Max Search .................................................................................................................................... 98
Response Surface Charts ....................................................................................................................... 99
Using the Response Chart ............................................................................................................. 101
Understanding the Response Chart Display ............................................................................. 102
Using the 2D Contour Graph Response Chart .......................................................................... 104
Using the 3D Contour Graph Response Chart .......................................................................... 105
Using the 2D Slices Response Chart ......................................................................................... 107
Response Chart: Example ........................................................................................................ 110
Response Chart: Properties ..................................................................................................... 111
Using the Spider Chart .................................................................................................................. 112
Using Local Sensitivity Charts ........................................................................................................ 112
Understanding the Local Sensitivities Display .......................................................................... 113
Using the Local Sensitivity Chart ............................................................................................. 115
Local Sensitivity Chart: Example ........................................................................................ 116
Local Sensitivity Chart: Properties ...................................................................................... 117
Using the Local Sensitivity Curves Chart .................................................................................. 118
Local Sensitivity Curves Chart: Example ............................................................................. 120
Local Sensitivity Curves Chart: Properties ........................................................................... 123
Using Goal Driven Optimization ............................................................................................................. 125
Creating a Goal Driven Optimization System ....................................................................................... 126
Transferring Design Point Data for Direct Optimization ........................................................................ 127
Goal Driven Optimization Methods ..................................................................................................... 128
Performing a Screening Optimization ............................................................................................ 130
Performing a MOGA Optimization ................................................................................................. 132
Performing an NLPQL Optimization ............................................................................................... 135
Performing an MISQP Optimization ............................................................................................... 137
Performing an Adaptive Single-Objective Optimization ................................................................. 139
Performing an Adaptive Multiple-Objective Optimization .............................................................. 142
Performing an Optimization with an External Optimizer ................................................................ 146
Locating and Downloading Available Extensions ..................................................................... 146
Installing an Optimization Extension ....................................................................................... 146
Loading an Optimization Extension ......................................................................................... 147
Selecting an External Optimizer .............................................................................................. 147
Setting Up an External Optimizer Project ................................................................................. 148
Performing the Optimization .................................................................................................. 148
Defining the Optimization Domain ..................................................................................................... 148
Defining Input Parameters ............................................................................................................ 149
Defining Parameter Relationships ................................................................................................. 150
Defining Optimization Objectives and Constraints ............................................................................... 153
Working with Candidate Points ........................................................................................................... 157
Viewing and Editing Candidate Points in the Table View ................................................................ 158
Retrieving Intermediate Candidate Points ..................................................................................... 159
Creating New Design, Response, Refinement, or Verification Points from Candidates ....................... 160
Verify Candidates by Design Point Update ..................................................................................... 161
Goal Driven Optimization Charts and Results ....................................................................................... 162
Using the Convergence Criteria Chart ............................................................................................ 162
Using the Convergence Criteria Chart for Multiple-Objective Optimization ............................... 163
Using the Convergence Criteria Chart for Single-Objective Optimization .................................. 164
Using the History Chart ................................................................................................................. 165

Working with the History Chart in the Chart View .................................................................... 166
Viewing History Chart Sparklines in the Outline View ............................................................... 169
Using the Objective/Constraint History Chart .......................................................................... 170
Using the Input Parameter History Chart ................................................................................. 171
Using the Parameter Relationships History Chart ..................................................................... 172
Using the Candidate Points Results ............................................................................................... 173
Understanding the Candidate Points Results Display ............................................................... 173
Candidate Points Results: Properties ........................................................................................ 174
Using the Sensitivities Chart (GDO) ............................................................................................... 175
Using the Tradeoff Chart ............................................................................................................... 176
Using the Samples Chart ............................................................................................................... 178
Using Six Sigma Analysis ........................................................................................................................ 181
Performing a Six Sigma Analysis .......................................................................................................... 181
Using Statistical Postprocessing .......................................................................................................... 181
Tables (SSA) .................................................................................................................................. 181
Using Parameter Charts (SSA) ........................................................................................................ 182
Using the Sensitivities Chart (SSA) ................................................................................................. 182
Statistical Measures ............................................................................................................................ 182
Working with DesignXplorer .................................................................................................................. 185
Working with Parameters .................................................................................................................... 185
Input Parameters .......................................................................................................................... 186
Defining Discrete Input Parameters ......................................................................................... 187
Defining Continuous Input Parameters .................................................................................... 188
Defining Manufacturable Values .............................................................................................. 188
Changing Input Parameters .................................................................................................... 190
Changing Manufacturable Values ............................................................................................ 191
Setting Up Design Variables .......................................................................................................... 191
Setting Up Uncertainty Variables ................................................................................................... 191
Output Parameters ....................................................................................................................... 193
Working with Design Points ................................................................................................................ 193
Design Point Update Order ........................................................................................................... 195
Design Point Update Location ....................................................................................................... 195
Preserving Generated Design Points to the Parameter Set .............................................................. 196
Exporting Generated Design Points to Separate Projects ............................................................... 197
Inserting Design Points ................................................................................................................. 197
Cache of Design Point Results ....................................................................................................... 197
Raw Optimization Data ................................................................................................................. 198
Design Point Log Files ................................................................................................................... 198
Failed Design Points ...................................................................................................................... 200
Preventing Design Point Update Failures ................................................................................. 200
Preserving Design Points and Files .......................................................................................... 202
Handling Failed Design Points ................................................................................................. 203
Working with Sensitivities ................................................................................................................... 204
Working with Tables ............................................................................................................................ 205
Viewing Design Points in the Table View ........................................................................................ 206
Editable Output Parameter Values ................................................................................................. 206
Copy/Paste ................................................................................................................................... 207
Exporting Table Data ..................................................................................................................... 207
Importing Data from a CSV File ..................................................................................................... 207
Working with Remote Solve Manager and DesignXplorer .................................................................... 208
Working with Design Exploration Project Reports ................................................................................ 210
DesignXplorer Theory ............................................................................................................................. 213

Understanding Response Surfaces ...................................................... 213
Central Composite Design ............................................................................................................. 214
Box-Behnken Design ..................................................................................................................... 215
Standard Response Surface - Full 2nd-Order Polynomial algorithms ............................................... 216
Kriging Algorithms ....................................................................................................................... 217
Non-Parametric Regression Algorithms ......................................................................................... 218
Sparse Grid Algorithms ................................................................................................................. 221
Understanding Goal Driven Optimization ............................................................................................ 224
Principles (GDO) ........................................................................................................................... 225
Guidelines and Best Practices (GDO) .............................................................................................. 227
Goal Driven Optimization Theory .................................................................................................. 229
Sampling for Constrained Design Spaces ................................................................................. 230
Shifted Hammersley Sampling (Screening) .............................................................................. 232
Single-Objective Optimization Methods .................................................................................. 233
Nonlinear Programming by Quadratic Lagrangian (NLPQL) ................................................ 233
Mixed-Integer Sequential Quadratic Programming (MISQP) ............................................... 240
Adaptive Single-Objective Optimization (ASO) .................................................................. 241
Multiple-Objective Optimization Methods ............................................................................... 244
Pareto Dominance in Multi-Objective Optimization .......................................................... 244
Convergence Criteria in MOGA-Based Multi-Objective Optimization ................................. 245
Multi-Objective Genetic Algorithm (MOGA) ....................................................................... 246
Adaptive Multiple-Objective Optimization (AMO) .............................................................. 251
Decision Support Process ........................................................................................................ 253
Understanding Six Sigma Analysis ....................................................................................................... 257
Principles (SSA) ............................................................................................................................. 258
Guidelines for Selecting SSA Variables ........................................................................................... 260
Choosing and Defining Uncertainty Variables .......................................................................... 260
Uncertainty Variables for Response Surface Analyses ......................................................... 260
Choosing a Distribution for a Random Variable .................................................................. 260
Measured Data ........................................................................................................... 260
Mean Values, Standard Deviation, Exceedance Values ................................................... 261
No Data ...................................................................................................................... 262
Distribution Functions ...................................................................................................... 263
Sample Generation ....................................................................................................................... 267
Weighted Latin Hypercube Sampling ............................................................................................ 267
Postprocessing SSA Results ........................................................................................................... 267
Histogram .............................................................................................................................. 267
Cumulative Distribution Function ............................................................................................ 268
Probability Table ..................................................................................................................... 269
Statistical Sensitivities in a Six Sigma Analysis .......................................................................... 270
Six Sigma Analysis Theory ............................................................................................................. 271
Troubleshooting ..................................................................................................................................... 279
Appendices ............................................................................................................................................. 281
Extended CSV File Format ................................................................................................................... 281
Index ........................................................................................................................................................ 283

ANSYS DesignXplorer Overview
The following links provide quick access to information concerning the ANSYS DesignXplorer and its
use:

Limitations (p. 5)

What is Design Exploration? (p. 27)

The User Interface (p. 6)

Parameters (p. 10)

Design Points (p. 12)

Response Points (p. 13)

Workflow (p. 13)

Design Exploration Options (p. 19)

"DesignXplorer Systems and Components" (p. 27)

Introduction to ANSYS DesignXplorer


Because a good design point is often the result of a trade-off between various objectives, the exploration
of a given design cannot be performed by using optimization algorithms that lead to a single design
point. It is important to gather enough information about the current design so as to be able to answer
the so-called "what-if" questions, quantifying the influence of design variables on the performance of
the product in an exhaustive manner. By doing so, the right decisions can be made based on accurate
information - even in the event of an unexpected change in the design constraints.

Design exploration describes the relationship between the design variables and the performance of the
product by using Design of Experiments (DOE), combined with response surfaces. DOE and response
surfaces provide all of the information required to achieve Simulation Driven Product Development.
Once the variation of the performance with respect to the design variables is known, it becomes easy
to understand and identify all changes required to meet the requirements for the product. Once the
response surfaces are created, the information can be shared in easily understandable terms: curves,
surfaces, sensitivities, etc. They can be used at any time during the development of the product without
requiring additional simulations to test a new configuration.

Available Tools
DesignXplorer has a powerful suite of DOE schemes:

Central Composite Design (CCD) (p. 64)

Optimal Space-Filling Design (OSF) (p. 65)

Box-Behnken Design (p. 66)

Custom (p. 67)

Custom + Sampling (p. 67)

Sparse Grid Initialization (p. 68)

Latin Hypercube Sampling Design (LHS) (p. 68)

CCD provides a traditional DOE sampling set, while Optimal Space-Filling's objective is to gain the
maximum insight with the fewest points, a feature that is very useful when the computation
time available to the user is limited.
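To illustrate the stratified idea behind schemes such as Latin Hypercube Sampling, the following minimal Python sketch (an illustration only, not DesignXplorer's implementation) places exactly one sample in each of n equal-probability strata per input parameter:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """One point per equal-width stratum in each dimension, on [0, 1)."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # A random point inside each stratum, then shuffle the strata so
        # the pairing between dimensions is randomized.
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = strata[i]
    return samples
```

Because every column covers its full range evenly, relatively few points already give good coverage of the design space.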

After sampling, design exploration provides several different meta-models to represent the simulation's
responses. The following types of response surfaces are available:

Standard Response Surface - Full 2nd-Order Polynomial (p. 77)

Kriging (p. 78)

Non-Parametric Regression (p. 83)

Neural Network (p. 83)

Sparse Grid (p. 84)

These meta-models can accurately represent highly nonlinear responses, such as those found in high
frequency electromagnetics.

Once the simulation's responses are characterized, DesignXplorer supplies the following types of
optimization algorithms:

Shifted Hammersley Sampling (Screening) (p. 232)

Multi-Objective Genetic Algorithm (MOGA) (p. 246)

Nonlinear Programming by Quadratic Lagrangian (NLPQL) (p. 233)

Mixed-Integer Sequential Quadratic Programming (MISQP) (p. 240)

Adaptive Single-Objective Optimization (ASO) (p. 241)

Adaptive Multiple-Objective Optimization (AMO) (p. 251)

You can also use extensions to integrate external optimizers into the DX workflow. For more
information, see Performing an Optimization with an External Optimizer (p. 146).

Several graphical tools are available to investigate a design: sensitivity plots, correlation matrices, curves,
surfaces, trade-off plots and parallel charts with Pareto Front display, and Spider charts.

Correlation matrix techniques are also provided to help the user identify the key parameters of a design
before creating response surfaces.

Design Exploration: What to Look for in a Typical Session


The main purpose of design exploration is to identify the relationship between the performance of the
product (maximum stress, mass, fluid flow, velocities, etc.) and the design variables (dimensions, loads,
material properties, etc.). Based on these results, the analyst will be able to influence the design so as
to meet the product's requirements. He will be able to identify the key parameters of the design and
how they influence the performance.

Design exploration provides tools to analyze a parametric design with a reasonable number of
parameters. The Response Surface methods described here are suitable for problems using about 10
to 15 input parameters.

Initial Simulation: Analysis of the Product Under all Operating Conditions


The first step of any design simulation is to create the simulation model. The simulation model can use
anything from a single physics up to a complex multiphysics simulation involving multiple conditions
and physics coupling.

In addition to performing the standard simulation, this step is also used to define the parameters to be
investigated. The input parameters (also called design variables) are identified, and may include CAD
parameters, loading conditions, material properties, etc.

The output parameters (also called performance indicators) are chosen from the simulation results and
may include maximum stresses, fluid pressure, velocities, temperatures, or masses, and can also be
custom defined. Product cost could be a custom-defined parameter based on masses, manufacturing
constraints, etc.

CAD parameters are defined from a CAD package or from the ANSYS DesignModeler application. Material
properties are found in the Engineering Data section of the project, while other parameters will have
their origin in the simulation model itself. Output parameters will be defined from the various simulation
environments (mechanics, CFD, etc.). Custom parameters are defined directly in the Parameter Set tab
accessed via the Parameter Set bar in the Project Schematic.

Identify Design Candidates


Once the initial model has been created and parameters defined, the next step in the session is to
create a response surface. After inserting a Response Surface system in the project, you need to define
the design space by giving the minimum and maximum values to be considered for each of the input
variables. Based on this information, the Design of Experiment (DOE) part of the Response Surface system
will create the design space sampling. Note that this sampling depends upon the choice made for the
DOE scheme; usually, the default CCD scheme will provide good accuracy for the final approximation.
Then, the DOE needs to be computed.

Once the DOE has been updated, a response surface is created for each output parameter. A response
surface is an approximation of the response of the system. Its accuracy depends on several factors:
complexity of the variations of the output parameters, number of points in the original DOE, and choice
of the response surface type. Several main types of response surfaces are available in DesignXplorer.
As a starting point, the Standard Response Surface (based on a modified quadratic formulation) will
provide satisfactory results when the variations of the output parameters are mild, while the Kriging scheme
will be used for stronger variations.
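The idea behind a polynomial response surface can be sketched in a few lines of Python. The example below fits a full second-order polynomial in a single input by ordinary least squares; it is a simplification for illustration, not the modified quadratic formulation used by the Standard Response Surface:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x**2 via the normal equations."""
    basis = [[1.0, x, x * x] for x in xs]
    A = [[sum(row[i] * row[j] for row in basis) for j in range(3)]
         for i in range(3)]
    b = [sum(basis[k][i] * ys[k] for k in range(len(xs))) for i in range(3)]
    return solve(A, b)
```

Once the coefficients are known, evaluating a new design point is a cheap polynomial evaluation rather than a new simulation, which is what makes response surfaces useful for "what-if" studies.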

After the response surfaces have been computed, the design can be thoroughly investigated using a
variety of graphical and numerical tools, and valid design points identified by optimization techniques.

Usually, the investigation will start with the sensitivity graphs. This bar or pie chart will graphically show
how much the output parameters are locally influenced by the input parameters around a given response
point. Note that varying the location of the response point may provide totally different graphs.
Thinking of the hill/valley analogy, if the response point is in a flat valley, the influence of the input
parameters will be small. If the point is at the top of a steep hill, the influence of the parameters will
be strong. The sensitivity graphs provide the first indication about the relative influence of the input
parameters.
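Such local sensitivities behave like partial derivatives evaluated at the response point. The following sketch (not the algorithm used by the chart) makes the hill/valley analogy concrete with central finite differences on a toy response:

```python
def local_sensitivities(f, point, rel_step=0.01):
    """Central-difference partial derivatives of f at a given response point."""
    sens = []
    for i, v in enumerate(point):
        h = rel_step * (abs(v) if v else 1.0)
        up, dn = list(point), list(point)
        up[i], dn[i] = v + h, v - h
        sens.append((f(up) - f(dn)) / (2.0 * h))
    return sens

# Toy response: steep in p[0] away from zero, mildly dependent on p[1].
response = lambda p: p[0] ** 2 + 0.1 * p[1]
```

At the "valley floor" (p[0] = 0) the first sensitivity vanishes, while on the "hillside" (p[0] = 3) it is large, which is exactly why moving the response point can change the chart completely.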

The response surfaces will provide curves or surfaces that show the variation of one output parameter
with respect to one or two input parameters at a time. These curves/surfaces also are dependent on
the response point.

Both sensitivity charts and response surfaces are key tools for the analyst to be able to answer the
"what-if" questions ("What parameter should we change if we want to reduce the cost?").

DesignXplorer provides additional tools to identify design candidates. While they could be determined
by a thorough investigation of the curves, it might be convenient to be guided automatically to some
interesting candidates. Access to optimization techniques that will find design candidates from the re-
sponse surfaces or other components containing design points is provided by the Goal Driven Optim-
ization (GDO) systems. There are two types of GDO systems: Response Surface Optimization and Direct
Optimization. These systems can be dragged and dropped over an existing Response Surface system
so as to share this portion of the data. For Direct Optimization, data transfer links can be created between
the Optimization component and any system or component containing design points. Several GDO
systems can be inserted in the project, which is useful if several hypotheses are to be analyzed.

Once a GDO system has been introduced, the optimization study needs to be defined, which includes
choosing the optimization method, setting the objectives and constraints, and specifying the domain.
Then the optimization problem can be solved. In many cases, there will not be a unique solution to the
optimization problem, and several candidates will be identified. The results of the optimization process
are also very likely to include candidates that cannot be manufactured (a radius of 3.14523 mm is probably
hard to achieve!). But since all information about the variability of the output parameters is provided
by the source of design point data, whether a response surface or another DesignXplorer component,
it becomes easy to find a design candidate close to the one indicated by the optimization process that
will be acceptable.

As a good practice, it is also recommended to check the accuracy of the response surface for the design
candidates. To do so, the candidate should be verified by a design point update, so as to check the
validity of the output parameters.

Assess the Robustness of the Design Candidates


Once one or several design points have been identified, the probabilistic analysis will help quantify the
reliability or quality of the product by means of a statistical analysis. Probabilistic analysis typically involves
four areas of statistical variability: geometric shape, material properties, loading, and boundary conditions.
For example, the statistical variability of the geometry of a product would try to capture product-to-
product differences due to manufacturing imperfections quantified by manufacturing tolerances.
Probabilistic characterization provides a probability of success or failure and not just a simple yes/no
evaluation. For instance, a probabilistic analysis could determine that one part in 1 million would fail,
or the probability of a product surviving its expected useful life.
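The flavor of such a statement can be reproduced with a small Monte Carlo experiment. The distribution, its parameters, and the failure criterion below are invented for illustration and are not DesignXplorer defaults:

```python
import random

def failure_probability(n_trials, seed=0):
    """Estimate P(strength < load) for a toy stress model:
    strength ~ Normal(100, 5), constant applied load of 90."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_trials)
                   if rng.gauss(100.0, 5.0) < 90.0)
    return failures / n_trials
```

Here the exact probability is about 0.023 (the load sits two standard deviations below the mean strength), and the estimate approaches it as n_trials grows.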

Inserting a Six-Sigma system in the schematic will provide the robustness analysis. The process here is
very similar to the response surface creation. The main difference is in the definition of the parameters:
instead of giving minimum and maximum values defining the parameters' ranges, the parameters will
be described in terms of statistical curves and their associated parameters. Once the parameters are
defined, a new DOE will be computed, as well as the corresponding response surfaces. Additional results
in the form of probabilities and statistical distributions of the output parameters are provided, along
with the sensitivity charts and the response surfaces.

Determine the Number of Parameters


To get accurate response surfaces within a reasonable amount of time, the number of input parameters
should be limited to 10 to 15. If more parameters need to be investigated, the correlation matrix will
provide a way to identify some key parameters before creating the response surface. The Parameters
Correlation systems should be used prior to the Response Surface systems, to reduce the number of
parameters to the above mentioned limit. The Parameters Correlation method will perform simulations
based on a random sampling of the design space, so as to identify the correlation between all parameters.
The number of simulations will depend upon the number of parameters, as well as the convergence
criteria for the means and standard deviations of the parameters. The user can provide a hard limit for
the number of points to be computed; it should be noted that the accuracy of the correlation matrix
might be affected if not enough points are computed.
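The core of such a correlation study can be sketched as a Pearson correlation matrix computed over randomly sampled design points. This is a simplification for illustration; the Parameters Correlation method additionally monitors the convergence of means and standard deviations, as noted above:

```python
import random

def pearson(xs, ys):
    """Linear (Pearson) correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def correlation_matrix(columns):
    """Symmetric matrix of pairwise correlations between parameter columns."""
    return [[pearson(a, b) for b in columns] for a in columns]
```

Parameters whose column shows a near-zero correlation with every output are candidates for exclusion before the (more expensive) response surface is built.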

Increase the Response Surface Accuracy


Usually, a richer set of points for the DOE will provide a more accurate response surface. However, it
may not be easy to know before the response surface is created how many additional points are required.

The Kriging response surface is a first solution to that problem. It will allow DesignXplorer to determine
the accuracy of the response surface as well as the points that would be required to increase this
accuracy. To use this feature, the Response Surface Type should be set to Kriging. The refinement options
will then be available to the user. The refinement can be automated or user-defined. This last option
should be used to control the number of points to be computed.

The Sparse Grid response surface is another solution. It is an adaptive meta-model driven by the accuracy
that you request. It automatically refines the matrix of design points where the gradient of the output
parameters is higher in order to increase the accuracy of the response surface. To use this feature, the
Response Surface Type should be set to Sparse Grid and the Design of Experiments Type should be set
to Sparse Grid Initialization.

There is also a manual refinement option for all of the response surface types except Sparse Grid, which
allows the user to enter specific points into the set of existing design points used to calculate the
response surface.

Limitations
Suppressed properties are not available for the Design of Experiments (DOE) method.

When you use an external optimizer for Goal Driven Optimization, unsupported properties and
functionality are not displayed in the DesignXplorer interface.

ANSYS DesignXplorer Licensing Requirements


In order to run any of the ANSYS DesignXplorer systems, you must have an ANSYS DesignXplorer license.
This license must be available when you Preview or Update a DesignXplorer component or system and
also as soon as results need to be generated from the response surface.

You must also have licenses available for the systems that will be solving the design points generated
by the DesignXplorer systems (Design of Experiments, Parameters Correlation, and Response Surface if
a refinement is defined). The licenses for the solvers must be available when the user Updates the cells
that generate design points.

The ANSYS DesignXplorer license will be released when:

all DesignXplorer systems are deleted from the schematic

the user uses the New or Open command to switch to a project without any DesignXplorer systems

the user exits Workbench

Note

If you do not have an ANSYS DesignXplorer license, you can successfully resume an existing
design exploration project and review the already generated results, with some interaction
possible on charts. But as soon as an ANSYS DesignXplorer Update or a response surface evaluation
is needed, a license is required and you will get an error dialog.

In order to use the ANSYS Simultaneous Design Point Update capability you must have one
solver license for each simultaneous solve. Be aware that the number of design points that can
be solved simultaneously may be limited by hardware, by your RSM configuration, or by available
solver licenses.

When opening a project created prior to release 13.0, an ANSYS DesignXplorer license is initially
required to generate the parametric results that were not persisted before. Once the project
is saved, a license is no longer required to reopen the project.

The User Interface


The Workbench user interface allows you to easily build your project on a main project workspace called
the Project tab. In the Project tab, you can take DesignXplorer systems from the Toolbox and add
them to the Project Schematic. DesignXplorer systems allow you to perform the different types of
parametric analyses on your project, as described in the previous section. Once you've added the systems
to your Project Schematic, you edit the cells in the systems to open the tabs for those cells.

Project Schematic View


You will initially create your project containing systems and models in the Project Schematic. Then
you can choose which of the parameters in those systems will be exposed to the DesignXplorer systems
through the Parameter Set bar. Once the systems and Parameter Set are defined, you can add the
DesignXplorer systems that you need. A Project Schematic containing DesignXplorer systems would
look something like the following:

Toolbox
When you are viewing the Project Schematic, the following system templates are available from the
DesignXplorer Toolbox.

To perform a particular type of analysis, drag the template from the Toolbox onto your Project
Schematic below the Parameter Set bar.

When you are viewing a tab, the Toolbox will display items that can be added to the Outline view
based on the currently selected cell. For example, if you are in a Response Surface tab and you select
a Response Point cell in the Outline view, the Toolbox will contain chart templates that you can add
to that response point. Double-click on a template or drag it onto the selected cell in the Outline view
to add it to the Outline. Or you can insert a chart using the appropriate right-click menu option when
selecting the Charts folder or a Response Point cell in the Outline view.

Component Tabs
Once a DesignXplorer system is added to the Project Schematic, you can open component tabs via
the cells in the system. Each tab is a workspace in which you can set up analysis options, run the
analysis, and view results. For example, if you right-click on the Design of Experiments cell and select
Edit, the corresponding Design of Experiments tab will open, showing the following configuration:

You will see this window configuration for any DesignXplorer system component that you edit. The
four views visible are:

Outline: provides a hierarchy of the main objects that make up the component that you are editing.

The state icon on the root node, also referred to as the model node, tells you if the data is up-to-date
or needs to be updated. It also helps you to figure out the impact of your changes on the component
and parameter properties.

A quick help message is associated with the various states. It is particularly useful when the state is
Attention Required ( ) because it explains what is invalid in the current configuration. To display
the quick help message, click the Information button in the Help column on the right of the model
node.

On results objects nodes (response points, charts, Min-Max search, etc.), a state icon showing if the
object is up-to-date or out of date helps you to understand the current status of the object. If you
change a Design of Experiments setting, the state icon of the corresponding chart will be updated,
given the pending changes. For the update failed state ( ) on a result object, you can try to update
this object using the appropriate right mouse button menu option when selecting its node.

Table: provides a tabular view of the data associated with that component. The header bar contains a
description of the table definition. Right-click on a cell of the table to export the data table to a CSV
(comma-separated values) file.

In the Table view, input parameter values and output parameter values that are obtained from a
simulation are displayed in black text. Output parameters that are based on a response surface are
displayed in a different color, as specified in the Options dialog. For details, see Response Surface
Options (p. 23).
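Exported tables can then be consumed by any CSV-aware tool. For example, in Python (the column names below are hypothetical; the actual layout is documented in the Extended CSV File Format appendix):

```python
import csv
import io

# Hypothetical snapshot of an exported design-point table.
exported = """Name,P1 - Thickness,P2 - Maximum Stress
DP 0,1.0,251.3
DP 1,1.1,244.8
"""

rows = list(csv.DictReader(io.StringIO(exported)))
thicknesses = [float(r["P1 - Thickness"]) for r in rows]
```

The same pattern applies to chart data exported from the Chart view.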

Properties: provides access to the properties of the object selected in the Outline view. For example,
when a parameter is selected in the Outline, you can set its bounds. When the Response Surface node
is selected, you can set its meta-model type and the associated options. When a chart is selected in the
Outline, you can set its plotting properties, etc.

The Properties view displays output values for the charts or response points selected in the Outline
view. Input parameter values and output parameter values that are obtained from a simulation are
displayed in black text, while output parameters that are based on a response surface are displayed
in a different color, as specified in the Options dialog. For details, see Response Surface Options (p. 23).

Note

In the Properties view, the coloring convention for output parameter values is not applied
to the Calculated Minimum and Calculated Maximum values; these values are always
displayed in black text.

Chart or Results: displays the various charts available for the different DesignXplorer system cells. Right-
click on the chart to export the chart data to a CSV (comma-separated values) file. This section is labeled
Results only for the Optimization tab of a Goal Driven Optimization study.

Note that you can insert and duplicate charts (or a response point with charts for the Response
Surface cell) even if the system is out of date. If the system is out of date, the charts are displayed
and updated in the Chart view when the system is updated. For any DesignXplorer cell where a chart
is inserted before updating the system, all types of charts supported by the cell are inserted by default
at the end of the update. If a cell already contains a chart, then no new chart is inserted by default.
(For the Response Surface, if there is no response point, a response point is inserted by default with
all charts.) For more information, see Using DesignXplorer Charts (p. 16).

In general, you select a cell in the Outline view and either set up its Properties or review the Chart
and/or Table associated with that cell.

Context Menu
When you right-click on a DesignXplorer component in the Project Schematic or on the component
node in the Outline view when editing a component, the right-click context menu provides the following
options relevant to the component and its state: Update, Preview, Clear Generated Data, and Refresh.
These operations are performed only on the selected component.

Parameters
The following types of parameters are used in DesignXplorer:

Input Parameters (p. 11)

Output Parameters (p. 12)

See Working with Parameters and Design Points for more information about parameters.

See Tree Usage in Parameters and Parameter Set Tabs (p. 12) for more information about grouping
parameters by application.

Input Parameters
Input parameters are those parameters that define the inputs to the analysis for the model under
investigation. Input parameters have predefined ranges, which can be changed. These include (but are
not limited to) CAD parameters, Analysis parameters, DesignModeler parameters, and Mesh parameters.
CAD and DesignModeler input parameters may include length, radius, etc.; Analysis input parameters
may include pressure, material properties, materials, sheet thickness, etc.; Mesh parameters may include
relevance, number of prism layers, or mesh size on an entity.

When you start a Design of Experiments analysis in ANSYS DesignXplorer, a default of +/- 10% of the
current value of each input parameter is used to initialize the parameter range. If any parameter has a
current value of 0.0, then the initial parameter range will be computed as 0.0 ± 10.0. Since DesignXplorer
is not aware of the physical limits of parameters, you will need to check that the assigned range is
compatible with the physical limits of the parameter. Ideally, the current value of the parameter will be
at the midpoint of the range between the upper and lower bound, but this is not a requirement. For
details on defining the range for input parameters, see Defining Continuous Input Parameters (p. 188)
and Defining the Range for Manufacturable Values (p. 189).
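The initialization rule described above can be written down directly. This is a paraphrase of the documented behavior rather than DesignXplorer code, and the sorting of bounds for negative current values is an assumption:

```python
def initial_range(current_value):
    """Default DOE range: +/- 10% of the current value, or 0.0 +/- 10.0
    when the current value is exactly zero."""
    if current_value == 0.0:
        return (-10.0, 10.0)
    bounds = (0.9 * current_value, 1.1 * current_value)
    # Sort so lower <= upper even for negative current values (assumption).
    return (min(bounds), max(bounds))
```

As the text notes, this default knows nothing about the physical limits of the parameter, so the assigned range still has to be checked by the user.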

When you define a range for an input parameter, the relative variation must be equal to or greater than
1e-10 in its current unit. If the relative variation is less than 1e-10, you can either adjust the variation
range or disable the parameter.

If you disable an input parameter, its initial value (which becomes editable) will be used for the design
exploration study. If you change the initial value of a disabled input parameter during the study, all
dependent results are invalidated. A disabled input parameter can have a different initial value in each
DesignXplorer system.

Input parameters can be discrete or continuous, and each of these types has specific forms.

Parameter Type          Description                          Example

Discrete Parameters     Valid only at integer values         Number of holes, number of weld
                                                             points, number of prism layers
Continuous Parameters   Continuous, non-interrupted          Thickness, force, temperature
                        range of values

Discrete parameters physically represent different configurations or states of the model. An example is
the number of holes in a geometry. Discrete parameters allow you to analyze different design variations
in the same parametric study without having to create multiple models for parallel parametric analysis.
For more information, see Defining Discrete Input Parameters (p. 187).

Continuous parameters physically vary in a continuous manner between a lower and an upper bound
defined by the user. Examples are a CAD dimension or a load magnitude. Continuous parameters allow
you to analyze any value within a defined range, with each parameter representing a direction
in the design space and treated as a continuous function in the DOE and Response Surface. For more
information, see Defining Continuous Input Parameters (p. 188).

With continuous parameters, you can also apply a Manufacturable Values filter to the parameter.
Manufacturable Values represent real-world manufacturing or production constraints, such as drill bit sizes,
sheet metal thicknesses, and readily available bolts or screws. By applying the Manufacturable Values
filter to a continuous parameter, you ensure that only values that realistically represent
manufacturing capabilities are included in the postprocessing analysis. For more information, see Defining
Manufacturable Values (p. 188).


Output Parameters
Output parameters are those parameters that result from the geometry or are the response outputs
from the analysis. These include (and are not limited to) volume, mass, frequency, stress, heat flux,
number of elements, and so forth.

Derived Parameters
Derived parameters are specific output parameters defined as an analytical combination of output or
input parameters. As the definition suggests, derived parameters are calculated from other parameters
by using equations that you provide. They are created in the system and passed into ANSYS DesignXplorer
as output parameters. See Working with Parameters and Design Points for more information.
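
Conceptually, a derived parameter is just an expression evaluated over other parameters. As a sketch
(the parameter names and the expression below are hypothetical, for illustration only):

```python
# Hypothetical parameters: P2 is a material density input, P3 is a
# volume output; the derived parameter computes mass = density * volume.
parameters = {"P2 - Density": 7850.0, "P3 - Volume": 0.004}

def derived_mass(p):
    """Hypothetical derived parameter: an analytical combination of
    other parameters, evaluated once their values are available."""
    return p["P2 - Density"] * p["P3 - Volume"]

print(round(derived_mass(parameters), 3))  # 31.4
```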

Tree Usage in Parameters and Parameter Set Tabs


A Workbench project can contain parameters defined in different systems and parameters defined in
the Parameters tab and/or the Parameter Set tab. You can review the parameters of a given system
via the Parameters cell appearing at the bottom of the system or the Parameter Set bar. To open the
corresponding tab, right-click and select Edit.

When editing the Parameter Set bar, all of the parameters of the project are listed in the Outline, under
the Input Parameters and Output Parameters nodes, depending on their nature.

The parameters are also grouped by system name to reflect the origin of the parameters and the
structure of the Project Schematic when working in parametric environments. Parameters are manipulated
in the Parameters, Parameter Set, and DesignXplorer tabs, so the same tree structure is used in all
of these tabs.

Note

Edit the system name by right-clicking the system name on the Project Schematic.

Design Points
A design point is defined by a snapshot of parameter values where output parameter values were
calculated directly by a project update. Design points are created by design exploration; for instance, when
processing a Design of Experiments or a Correlation Matrix, or when refining a Response Surface.

It is also possible to insert a design point at the project level from an optimization candidate design
in order to perform a validation update. Note that the output parameter values are not copied to the
created design point, since they were calculated by design exploration and are, by definition,
approximated. Actual output parameter values will be calculated from the design point input parameters
when a project update is done. See Working with Design Points (p. 193) for more information on adding
design points to the project.

You can also edit design point update process settings, including the order in which points are updated
and the location in which the update will occur. When submitting design points for update, you can
specify whether the update will be run locally on your machine or sent via RSM for remote processing.
For more information, see Working with Design Points (p. 193).


Response Points
A response point is defined by a snapshot of parameter values where output parameter values were
calculated in ANSYS DesignXplorer from a response surface. As such, the output values are approximations
calculated from response surfaces, and the most promising designs should be verified by a solve in the
system using the same input parameter values.

New response points can be created when editing a Response Surface cell from the Outline view or
from the Table view. Response points as well as design points can also be inserted from the Table view
or the charts, using the appropriate right-click menu option when selecting a table row or a point on
a chart. For instance, it is possible to insert a new response point from a point selected with the mouse
on a Response Chart. See the description of the Response Point Table for more information on creating
custom response points.

It is possible to duplicate a response point by selecting it in the Outline view and either choosing
Duplicate from the contextual menu or using the drag-and-drop mechanism. This operation attempts an
update of the response point, so that duplicating an up-to-date response point results in an
up-to-date duplicate.

Note

All the charts attached to a response point are also duplicated.

Duplicating a chart that is a child of a response point by using the contextual menu operation
creates a new chart under the same response point. By using the drag-and-drop mechanism, however,
you can create the duplicate under another response point.

Workflow
To run a parametric analysis in ANSYS DesignXplorer, you will need to:

Create your model (either in DesignModeler or other CAD software) and load it into one of the analysis
systems available in Workbench.

Select the parameters you want to use.

Add to the project the DesignXplorer features that you want to use.

Set the analysis options and parameter limits.

Update the analysis.


View the results of the design exploration analysis.

Generate a project report to display the results of your analysis.

Adding Design Exploration Templates to the Schematic


You can use as many of the ANSYS DesignXplorer features as you like and rerun them with different
parameters and limits as many times as you need to refine your design. The following steps outline the
process for putting design exploration elements into your Project Schematic:

1. Create parts and assemblies in either the DesignModeler solid modeler or any of the supported CAD
systems. Features in the geometry that will be important to the analysis should be exposed as parameters.
Such parameters can be passed to ANSYS DesignXplorer.

2. Drag an analysis system into the Project Schematic, connecting it to the DesignModeler or CAD file.

3. Click on the Parameter Set bar.

4. In the Outline view, click on the various input parameters that you need to modify. Set the limits of the
selected parameter in the Properties view.

5. Drag the design exploration template that you would like to use into the Project Schematic below the
Parameter Set bar.

Duplicating Existing DesignXplorer Systems


You can duplicate any DesignXplorer systems that you have created on your Project Schematic. The
manner in which you duplicate them dictates the data that is shared between the duplicated systems.

For any DesignXplorer system, right-click on the B1 cell of the system and select Duplicate. A new system
of that type will be added to the schematic under the Parameter Set bar. No data is shared with the
original system.

For a DesignXplorer system that has a DOE cell, right-click on the DOE cell and select Duplicate. A new
system of that type will be added to the schematic under the Parameter Set bar. No data is shared with
the original system.

For a DesignXplorer system that has a Response Surface cell, right-click on the Response Surface cell and
select Duplicate. A new system of that type will be added to the schematic under the Parameter Set
bar. The DOE data will be shared from the original system.

For a GDO or SSA DesignXplorer system, if you right-click on one of those cells and select Duplicate, a
new system of that type will be added to the schematic under the Parameter Set bar. The DOE and
Response Surface data will be shared from the original system.

Any cell in the duplicated DesignXplorer system that contains data that is not shared from the original
system will be marked as Update Required.

While duplicating a DesignXplorer system, the definitions of the user's data (i.e., charts, response
points, and metric objects) are also duplicated. An Update operation is then needed to calculate the
results for the duplicated data.


Running Design Exploration Analyses


After you have placed all of the design exploration templates on your schematic, you should set up the
individual DesignXplorer systems and then run the analysis for each cell.

1. Specify design point update options in the Parameter Set Properties view. Note that these can differ
from the global settings specified in the Tools > Options > Solution Process dialog.

2. For each cell in the design exploration system, right-click and select Edit to open the tab for that cell.
On the tab, set up any analysis options that are needed. These can include parameter limits, optimization
objectives or constraints, optimization type, Six Sigma Analysis parameter distributions, etc.

3. Run the analysis by one of the following methods:

In the tab, right-click the cell and select Update from the context menu.

From the Project Schematic, right-click a system cell and select Update to update the cell. To update
the entire project, you can either click the Update Project toolbar button or right-click in the
schematic and select Update Project.

4. Make sure that you set up and solve each cell in a DesignXplorer system to complete the analysis for
that system.

5. View the results of each analysis from the tabs of the various cells in the DesignXplorer systems (charts,
tables, statistics, etc.).

When you are in the Project Schematic view, cells in the various systems contain icons to indicate their
state. If they are out of date and need an update, you can right-click on the cell and select Update
from the menu.

Monitoring Design Exploration Analyses


After starting a design exploration analysis Update, there are two different views available to help you
monitor your analysis.

Progress View
There is a Progress view that you can open from the View menu or from the Show Progress button at
the lower right corner of the window. During execution, a progress bar appears in the
Progress cell. The name of the executing task appears in the Status cell, and information about
the execution appears in the Details cell. This view continuously reports the status of the execution
until it is complete. You can stop an execution by clicking the Stop button to the right of the progress
bar, and you can restart it at a later time by using any of the Update methods that you would use to
start a new Update for the cell or tab.


Messages View
If solution errors exist and the Messages view is not open, the Show Messages button in the lower
right of the window will flash orange and indicate the number of messages. Click this button to examine
solution error information. You can also open the Messages view from the View menu in the toolbar.

Reporting on the Design Exploration Analysis


You can generate a project report to summarize and display the results of your design exploration
analysis. A project report provides a visual snapshot of the project at a given point in time. The contents
and organization of the report reflect the layout of the Project Schematic.

For more information, see Working with Design Exploration Project Reports (p. 210).

Using DesignXplorer Charts


DesignXplorer charts are visual tools that can help you to better understand your design exploration
study. A variety of charts is available for each DesignXplorer component and can be viewed in the
Charts view of the component tab.

All of the charts available for the component are listed in the tab's Toolbox. When you update a com-
ponent for the first time, one of each chart available for that component is automatically created and
inserted in the Outline view. (For the Response Surface component, a new response point is automat-
ically inserted as well.) If a component already contains charts, the existing charts are re-created with
the next update.

Note

Charts are inserted under a parent cell in the Outline view, and most charts are created
under the Charts cell. Charts for the Response Surface component are an exception; the
Predicted vs. Observed chart is inserted under the Goodness of Fit cell, while the rest are
inserted under a Response Point cell.

Adding or Duplicating Charts


You can insert and duplicate charts even if the system is out of date. If the system is out of date, the
charts are displayed and updated in the Chart view when the system is updated.

Insert a new chart by either of the following methods:

Drag a chart template from the Toolbox and drop it on the Outline view.

In the Outline view, right-click on the cell under which the chart will be added and select the Insert option
for the type of chart.


Duplicate a chart by either of the following methods:

To create a second instance of a chart (with default settings) or to create a new Response Surface chart
under a different response point, drag the desired chart template from the Toolbox and drop it on the
parent cell.

To create an exact copy of an existing chart, right-click on the chart and select Duplicate from the context
menu.

For Response Surface charts, the Duplicate option will create an exact copy of the existing chart under
the same response point. To create a fresh instance of the chart type under a different response point,
use the drag-and-drop operation. To create an exact copy under a different response point, drag the
existing chart and drop it on the new response point.

Chart duplication triggers a chart update. If the update succeeds, both the original chart and the duplicate
will be up-to-date.

Working with Charts


When you select a chart in the Outline view, its properties display below in the Properties view and
the chart itself displays in the Chart view.


In the Chart view, you can drag the mouse over various chart elements to view coordinates and other
element details.

You can change chart properties manually in the Properties view. You can also use the context-menu
options, accessed by right-clicking directly on the chart. Available options vary according to the type
and state of the chart and allow you to perform various chart-related operations.

To edit generic chart properties, right-click on a chart or chart element and select Edit Properties. For
more information, see Setting Chart Properties in the Workbench User's Guide.

To add new points to your design, right-click on a chart point. Depending on the chart type, you can select
from the following context options: Explore Response Surface at Point, Insert as Design Point, Insert
as Refinement Point, Insert as Verification Point, or Insert as Custom Candidate Point.

On charts that allow you to enable or disable parameters, you can:

Right-click on a chart parameter and select (depending on the parameter selected) Disable
<parameterID>, Disable all Input Parameters but <parameterID>, or Disable all Output Parameters but
<parameterID>.

If at least one parameter is already disabled, you can right-click anywhere on the chart and select Reverse
Enable/Disable to enable all disabled parameters and vice versa.

For general information on working with charts, see Working with the Chart View in the Workbench
User's Guide.

Exporting and Importing Data


You can export the design point values of a selected DesignXplorer table or chart to an ASCII file, which
can then be used by other programs for further processing. This file is primarily formatted according
to DesignXplorer's Extended CSV File Format, which is the Comma-Separated Values standard (file
extension .csv) with the addition of several nonstandard formatting conventions. For details, see Extended
CSV File Format (p. 281).

To export design point values to an ASCII file, right-click on a cell of a table in the Table view or right-
click on a chart in the Chart view and select the Export Data menu option. The parameter values for
each design point in the table will be exported to a CSV file. The values are always exported in units
as defined in Workbench (i.e., as when the Display Values as Defined option is selected under the
Units menu).
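
As an illustration, an exported table of design points can be consumed with ordinary CSV tooling. The
excerpt below is hypothetical; real exports follow the Extended CSV format, whose extra conventions
include comment lines beginning with '#':

```python
import csv
from io import StringIO

# Hypothetical excerpt of an exported design point table.
exported = """\
# DOE table exported from DesignXplorer (hypothetical data)
Name,P1 - Thickness,P2 - Force,P3 - Max Stress
DP 1,2.0,1000,151.2
DP 2,2.2,1000,139.8
DP 3,2.0,1100,166.3
"""

# Skip Extended CSV comment lines, then parse the remaining rows.
rows = list(csv.DictReader(
    line for line in StringIO(exported) if not line.startswith("#")
))

# Example post-processing: find the design point with the lowest stress.
best = min(rows, key=lambda r: float(r["P3 - Max Stress"]))
print(best["Name"])  # DP 2
```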

You can also import an external CSV file to create design points in a custom Design of Experiments
component, or refinement and verification points in a Response Surface component. Right-click on a
cell in the Table view, or a Design of Experiments or Response Surface component in the Project
Schematic view and select the appropriate Import option from the context menu.

See Working with Tables (p. 205) for more information.


Design Exploration Options


You can set the default behavior of design exploration features through the Tools > Options dialog box.
These options will be used as the default values for the analyses that you set up on the Project
Schematic of your Project tab. Each system has Properties views that allow you to change the default
values for any specific cell in an analysis. Changing the options in a cell's Properties view does not affect
the global default options described here in the Options dialog box. To access DesignXplorer's default
options:

1. From the main menu, choose Tools > Options. An Options dialog box appears. Expand the Design
Exploration item in the tree.


2. Click on Design Exploration or one of its sub-options.

3. Change any of the option settings by clicking directly in the option field on the right. You may need to
scroll down to see all of the fields. You will see a visual indication of the kind of interaction required in
the field (examples are drop-down menus and direct text entries). Some sections of the screens may be
grayed out unless a particular option above them is selected.

4. Click OK when you are finished changing the default settings.


The following options appear in the Options dialog box:

Design Exploration
Design of Experiments
Response Surface
Sampling and Optimization

There are many general Workbench options in the Options dialog box. You can find more information
about them in Setting ANSYS Workbench Options.

Design Exploration Options


Show Advanced Options: If selected, advanced properties are shown for DesignXplorer components.
Advanced properties are displayed in italics.

The Design Points category includes:

Preserve Design Points After DX Run: If checked, after the solution of a design exploration cell completes,
the design points created for the solution are saved to the project Table of Design Points. If this option is
selected, the following option is available:

Export Project for Each Preserved Design Point: If selected, in addition to saving the design points
to the project Table of Design Points, exports each of the design exploration design points as a
separate project in the same directory as the original project.

Retry All Failed Design Points: If checked, DX will make additional attempts to solve all of the design
points that failed to update during the first run. If this option is selected, the following options are available:

Number of Retries: Allows you to specify the number of times DX will try to update the failed design
points. Defaults to 5.

Retry Delay (seconds): Allows you to specify the number of seconds that will elapse between tries.
Defaults to 120.

The Graph category includes:

Chart Resolution: Changes the number of points used by the individual continuous input parameter axes
in the 2D/3D Response Surface charts. Increasing the number of points enhances the viewing resolution
of these charts. The range is from 2 to 100. The default is 25.

The Sensitivity category includes:

Significance Level: Controls the relative importance or significance of input variables. The allowable range
is from 0.0, meaning all input variables are "assumed" to be insignificant, to 1.0, meaning all input variables
are "assumed" to be significant. The default is 0.025.

Correlation Coefficient Calculation Type: Specifies the calculation method for determining sensitivity
correlation coefficients. The following choices are available:

Rank Order (Spearman) (default): Correlation coefficients are evaluated based on the rank of samples.

Linear (Pearson): Correlation coefficients are evaluated based on the sample values.
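
The difference between the two calculation types can be illustrated with a short, self-contained sketch
(plain Python, not an ANSYS API; the simple ranking shown assumes no tied sample values):

```python
def pearson(x, y):
    """Linear (Pearson) correlation, evaluated on the sample values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Rank Order (Spearman) correlation: Pearson applied to the ranks
    of the samples (no handling of ties in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    return pearson(ranks(x), ranks(y))

# A monotonic but nonlinear relationship: Spearman reports a perfect
# correlation, while Pearson reports a lower value.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [v ** 3 for v in x]
print(round(spearman(x, y), 3))  # 1.0
```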

The Parameter Options category includes:


Display Parameter Full Name:

If checked (default), displays the full parameter name (for example, P1 - Thickness) wherever possible
(depending on the available space in the interface).

If unchecked, displays the short parameter name (for example, P1) in the design exploration user interface,
except in the Outline view and in tables where parameters are listed vertically.

Parameter Naming Convention: Sets the naming style of input parameters within design exploration.
The following choices are available:

Taguchi Style: Names the parameters as Continuous Variables and Noise Variables.

Uncertainty Management Style (default): Names the parameters as Design Variables and Uncertainty
Variables.

Reliability Based Optimization Style: Names the parameters as Design Variables and Random Variables.

The Messages category includes:

Confirm if Min-Max Search can take a long time: Allows you to specify whether you should be alerted,
before performing a Min-Max Search when there are discrete or Manufacturable Values input parameters,
that the search may be very time-consuming. When selected (default), an alert is always displayed
before a Min-Max Search that includes discrete or Manufacturable Values parameters. When deselected,
the alert is not displayed.

Alert if Design Point Update Order is different from the recommended setting: Allows you to specify
whether you should be alerted, before a design point update operation, that the design point update
order has been changed from the recommended setting. When selected (default) and the recommended
design point update order has been modified, an alert will always be displayed before the design points
are updated. When deselected, the alert will not be displayed.

Design of Experiments Options


The Design of Experiments Type category includes the following settings. Descriptions of each setting
are included in the Working with Design Points (p. 193) section under "DesignXplorer Systems and
Components" (p. 27).

Central Composite Design (default): Note that changing the CCD type in the Options dialog box will
generate new design points, provided the study has not yet been solved.

Optimal Space-Filling Design

Box-Behnken Design

Latin Hypercube Sampling Design

When Central Composite Design is selected, the following options are available in the Central Composite
Design Options section:

Design Type: The following options are available:

Face-Centered

Rotatable


VIF-Optimality

G-Optimality

Auto-Defined

Template Type: Enabled for the Rotatable and Face-Centered design types. The following options are
available:

Standard

Enhanced
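
For orientation, the textbook full-factorial central composite design for k input parameters consists of
2^k factorial (corner) points, 2k axis points, and one center point; the fractional-factorial and
template variants above adjust this count for larger numbers of parameters. A quick sketch of the
full-factorial case (illustrative only):

```python
def ccd_full_factorial_count(k):
    """Sample count of a textbook full-factorial central composite
    design: 2**k factorial (corner) points, 2*k axis points, and one
    center point. Fractional-factorial CCD variants use fewer points
    for larger k."""
    return 2 ** k + 2 * k + 1

for k in (2, 3, 4, 5):
    print(k, ccd_full_factorial_count(k))  # 9, 15, 25, 43 samples
```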

When Optimal Space-Filling Design or Latin Hypercube Sampling Design is selected, the following
options are available in the Latin Hypercube Sampling or Optimal Space-Filling section:

Design Type: The following choices are available:

Max-Min Distance (default)

Centered L2

Maximum Entropy

Maximum Number of Cycles: Indicates the maximum number of iterations that the base DOE undergoes
so that the final sample locations conform to the chosen DOE type.

Samples Type: This indicates the specific method chosen to determine the number of samples. The
following options are available:

CCD Samples (default): Number of samples is the same as that of a corresponding CCD design.

Linear Model Samples: Number of samples is the same as that of a design of linear resolution.

Pure Quadratic Model Samples: Number of samples is the same as that of a design of pure quadratic
(constant and quadratic terms) resolution.

Full Quadratic Model Samples: Number of samples is the same as that of a design of full quadratic
(all constant, quadratic and linear terms) resolution.

User-Defined Samples: User determines the number of DOE samples to be generated.

For details on the different types of samples, see Optimal Space-Filling Design (OSF) (p. 65) and Latin
Hypercube Sampling Design (LHS) (p. 68).
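
As background, the defining property of a Latin Hypercube design is that each parameter's range is
divided into as many equal bins as there are samples, and each bin is used exactly once. A minimal
sketch (illustrative only; the actual LHS and OSF types additionally optimize the sample locations
according to the Design Type criteria above):

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Minimal Latin Hypercube sampling over the unit hypercube:
    each parameter's [0, 1] range is split into n_samples equal bins,
    and each bin is occupied by exactly one sample."""
    rng = random.Random(seed)
    samples = [[0.0] * n_params for _ in range(n_samples)]
    for p in range(n_params):
        bins = list(range(n_samples))
        rng.shuffle(bins)  # independent bin permutation per parameter
        for s in range(n_samples):
            samples[s][p] = (bins[s] + rng.random()) / n_samples
    return samples

pts = latin_hypercube(5, 2)
for p in range(2):
    print(sorted(int(row[p] * 5) for row in pts))  # [0, 1, 2, 3, 4]
```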

Number of Samples: For a Samples Type of User-Defined Samples, specifies the default number of
samples to be generated. Defaults to 10.

Response Surface Options


The Type setting includes the following options:

Standard Response Surface - Full 2nd Order Polynomial (default):


Kriging: Accurate interpolation method. If selected, the following Kriging Options are available:

Kernel Variation Type, which can be set to:

Variable Kernel Variation (default): Pure Kriging mode.

Constant Kernel Variation: Radial Basis Function mode.

Non-Parametric Regression: Provides improved response quality. Initialized with one of the available
DOE types.

Neural Network: Nonlinear statistical approximation method inspired by biological neural processing.
If selected, the following Neural Network Option is available:

Number of Cells: The number of cells controls the quality of the meta-model. The higher it is, the
better the network can capture parameter interactions. The recommended range is from 1 to 10.
The default is 3.

Once a meta-model is solved, it is possible to switch to another meta-model type or change the options
for the current meta-model in the Properties view of the Response Surface tab. After changing the
options in the Properties view, you must Update the Response Surface to obtain the new fitting.

The Color for Response Surface Based Output Values setting allows you to select the color used to
display output values that are calculated from a response surface. (Simulation output values that have
been calculated from a design point update are displayed in black.) The color you select here will be
applied to response surface based output values in the Properties and Table views of all components
and in the Results view of the Optimization component (specifically, for the Candidate Points chart
and the Samples chart). For example:

The DOE table, derived parameters with no output parameter dependency, verified candidate points, and
all output parameters calculated in a Direct Optimization system are simulation-based, and so are displayed
in black text.

The table of response points, the Min/Max search table, and candidate points in a Response Surface Op-
timization system are based on response surfaces, and so are displayed in the color specified by this option.

In a Direct Optimization, derived and direct output parameters are all calculated from a simulation and
so are all displayed in black. In a Response Surface Optimization, the color used for derived values
depends on the definition (expression) of the derived parameter. If the expression of the parameter depends
on at least one output parameter, either directly or indirectly, the derived values are considered to be
based on a response surface and so will be displayed in the color specified by this option.

Sampling and Optimization Options


The Random Number Generation category includes:

Repeatability: When checked, seeds the random number generator with the same value each time you
generate uncertainty analysis samples. With Repeatability unchecked, the random number generator is
seeded with the system time every time you generate samples. This applies to all methods in design
exploration where random numbers are needed, as in Six Sigma Analysis or Goal Driven Optimization. The
default is checked.
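
The effect of the Repeatability setting can be illustrated with a minimal seeding sketch (plain Python,
not an ANSYS API; `make_generator` is a hypothetical helper):

```python
import random
import time

def make_generator(repeatability, seed=12345):
    """Fixed seed when repeatability is on (same samples every run);
    otherwise seed from the system time, as described above."""
    return random.Random(seed if repeatability else time.time())

# Two repeatable generators produce identical sample sequences.
g1, g2 = make_generator(True), make_generator(True)
print([round(g1.random(), 3) for _ in range(3)] ==
      [round(g2.random(), 3) for _ in range(3)])  # True
```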

The Weighted Latin Hypercube category includes:


Sampling Magnification: Indicates the factor by which the number of regular Latin Hypercube
samples is reduced while still achieving a certain probability of failure (Pf). For example, the lowest
probability of failure for 1000 Latin Hypercube samples is approximately 1/1000; a magnification of 5
means that 200 weighted/biased Latin Hypercube samples are used to approach the lowest probability
of 1/1000. It is recommended not to use a magnification greater than 5. A greater magnification may
result in significant Pf error due to highly biased samples. The default is 5.
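
The arithmetic in the example above can be checked directly (illustrative only; `weighted_sample_count`
is a hypothetical helper, not an ANSYS API):

```python
def weighted_sample_count(regular_samples, magnification):
    """Lowest reachable probability of failure for a regular Latin
    Hypercube run, and the reduced number of weighted/biased samples
    used to approach it at the given magnification."""
    lowest_pf = 1.0 / regular_samples
    return regular_samples // magnification, lowest_pf

count, pf = weighted_sample_count(1000, 5)
print(count, pf)  # 200 0.001
```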

The Optimization category includes:

Constraint Handling

This option can be used for any optimization application and is best thought of as a "constraint sat-
isfaction" filter on samples generated from the Screening, MOGA, NLPQL, or Adaptive Single-Objective
runs. This is especially useful for Screening samples to detect the edges of solution feasibility for
highly constrained nonlinear optimization problems. The following choices are available:

Relaxed: The upper, lower, and equality constraints of the candidate points shown in the Table of
Optimization are treated as objectives rather than hard limits; thus any violation of them is still
considered feasible and the candidates remain displayed.

Strict (default): When chosen, the upper, lower, and equality constraints are treated as hard constraints;
that is, if any of them are violated, then the candidate is no longer displayed. So, in some cases no
candidate points may be displayed, depending on the extent of constraint violation.
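
The difference between the two modes can be illustrated with a small, hypothetical constraint filter (the parameter names and bounds are invented for the example):

```python
def feasible(candidates, constraints, mode="Strict"):
    """Filter candidate points the way a constraint-satisfaction filter might.

    candidates: list of dicts mapping parameter name -> value.
    constraints: list of (name, lower, upper) tuples; None means unbounded.
    In Strict mode any violation removes the candidate; in Relaxed mode
    violations are tolerated (treated as objectives), so all points pass.
    """
    if mode == "Relaxed":
        return list(candidates)
    kept = []
    for pt in candidates:
        ok = all(
            (lo is None or pt[name] >= lo) and (hi is None or pt[name] <= hi)
            for name, lo, hi in constraints
        )
        if ok:
            kept.append(pt)
    return kept

points = [{"mass": 9.0}, {"mass": 12.0}]
limits = [("mass", None, 10.0)]          # hypothetical upper bound: mass <= 10
assert len(feasible(points, limits, "Strict")) == 1
assert len(feasible(points, limits, "Relaxed")) == 2
```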

DesignXplorer Systems and Components
The topics in this section provide an introduction to using specific DesignXplorer systems and their in-
dividual components for your design exploration projects.
What is Design Exploration?
DesignXplorer Systems
DesignXplorer Components

What is Design Exploration?


Design Exploration is a powerful approach used by DesignXplorer for designing and understanding the
analysis response of parts and assemblies. It uses a deterministic method based on Design of Experiments
(DOE) and various optimization methods, with parameters as its fundamental components. These
parameters can come from any supported analysis system, DesignModeler, and various CAD systems.
Responses can be studied, quantified, and graphed. Using a Goal Driven Optimization method, the
deterministic approach can produce multiple candidate design points. You can also explore the calculated
response surface and generate design points directly from the surface.

After setting up your analysis, you can pick one of the system templates from the Design Exploration
toolbox to:

parameterize your solution and view an interpolated response surface for the parameter ranges

view the parameters associated with the minimum and maximum values of your outputs

create a correlation matrix that shows you the sensitivity of outputs to changes in your input parameters

set output objectives and see what input parameters will meet those objectives

perform a Six Sigma Analysis on your model

DesignXplorer Systems
The following DesignXplorer systems will be available if you have installed the ANSYS DesignXplorer
product and have an appropriate license:
Parameters Correlation System
Response Surface System
Goal Driven Optimization Systems
Six Sigma Analysis System

Parameters Correlation System


You can use the Parameters Correlation system to identify the major input parameters. This is achieved
by analyzing their correlation and the relative weight of the input parameters for each output parameter.

When your project has many input parameters (more than 10 or 15), building an accurate response
surface becomes expensive. It is therefore recommended that you identify the major input parameters
with the Parameters Correlation feature and disable the minor input parameters when


building the response surface. With fewer input parameters, the response surface will be more accurate
and less expensive to build.

The Parameters Correlation system contains the following component:

Parameters Correlation

Response Surface System


There is one response surface for every output parameter. Output parameters are represented in terms
of the input parameters, which are treated as independent variables.

For the deterministic method, response surfaces for all output parameters are generated in two steps:

Solving the output parameters for all design points as defined by a Design of Experiments

Fitting the output parameters as a function of the input parameters using regression analysis techniques
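
These two steps can be sketched for a single input parameter, using an ordinary least-squares fit of a full 2nd-order polynomial. The nine design points and the stand-in quadratic model are illustrative assumptions; in practice the output values come from the simulation:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal equations."""
    n = len(xs)
    s = [sum(x ** k for x in xs) for k in range(5)]        # moments of x
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

# Step 1: "solve" the output at the DOE design points (stand-in for simulation).
xs = [i * 0.25 for i in range(9)]                  # DOE sample locations in [0, 2]
ys = [3.0 + 2.0 * x - 0.5 * x ** 2 for x in xs]

# Step 2: fit the response surface by regression.
c0, c1, c2 = fit_quadratic(xs, ys)

def response(x):
    """Evaluate the fitted response surface at an arbitrary input value."""
    return c0 + c1 * x + c2 * x ** 2

assert abs(response(1.5) - (3.0 + 2.0 * 1.5 - 0.5 * 1.5 ** 2)) < 1e-8
```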

The Response Surface system contains the following cells:

Design of Experiments

Response Surface

Goal Driven Optimization Systems


You can use Goal Driven Optimization to state a series of design goals, which will be used to generate
optimized designs. Both objectives and constraints can be defined and assigned to each output para-
meter. Goals may be weighted in terms of importance.
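
One simple way to rank designs against weighted goals is a normalized weighted-sum score; this is only an illustrative aggregation (with invented parameter names and targets), not the scheme DesignXplorer uses internally:

```python
def weighted_goal_score(values, goals):
    """Aggregate normalized deviations from target values, weighted by
    importance. goals: name -> (target, weight). Lower scores are better."""
    score = 0.0
    for name, (target, weight) in goals.items():
        score += weight * abs(values[name] - target) / (abs(target) or 1.0)
    return score

design_a = {"mass": 9.5, "stress": 180.0}
design_b = {"mass": 8.0, "stress": 210.0}
goals = {"mass": (8.0, 1.0), "stress": (200.0, 2.0)}   # stress weighted higher
assert weighted_goal_score(design_b, goals) < weighted_goal_score(design_a, goals)
```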

There are two types of Goal Driven Optimization systems: Response Surface Optimization and Direct
Optimization.

The Response Surface Optimization GDO system contains the following components:

Design of Experiments

Response Surface

Optimization

The Direct Optimization GDO system contains the following component:

Optimization

Six Sigma Analysis System


Six Sigma Analysis (SSA) is an analysis technique for assessing the effect of uncertain input parameters
and assumptions on your model.

A Six Sigma Analysis allows you to determine the extent to which uncertainties in the model affect the
results of an analysis. An "uncertainty" (or random quantity) is a parameter whose value is impossible
to determine at a given point in time (if it is time-dependent) or at a given location (if it is location-
dependent). An example is ambient temperature: you cannot know precisely what the temperature will
be one week from now in a given city.
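
Such an uncertain parameter is typically modeled as a random variable and sampled. A minimal Monte Carlo sketch, assuming an illustrative normal distribution for ambient temperature (mean and spread are invented for the example):

```python
import random

def sample_ambient_temperature(n, mean=20.0, sigma=5.0, seed=1):
    """Draw plausible ambient temperatures (degrees C) for an uncertain
    parameter; the distribution values are illustrative assumptions."""
    rng = random.Random(seed)
    return [rng.gauss(mean, sigma) for _ in range(n)]

temps = sample_ambient_temperature(10_000)
mean = sum(temps) / len(temps)
# The sample mean approaches the assumed mean as the sample count grows.
assert abs(mean - 20.0) < 0.5
```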


The Six Sigma Analysis system contains the following components:

Design of Experiments (SSA)

Response Surface (SSA)

Six Sigma Analysis

DesignXplorer Components
DesignXplorer systems are made up of one or more components. Double-clicking on a component in
the Project Schematic opens the tab for that component. The tab will contain the following views:
Outline, Properties, Table, and Chart. The content of the sections varies depending on the object se-
lected in the Outline view. The following topics summarize the tabs of the various components that
make up the design exploration systems.
Design of Experiments Component Reference
Parameters Correlation Component Reference
Response Surface Component Reference
Optimization Component Reference
Six Sigma Analysis Component Reference

Design of Experiments Component Reference


Design of Experiments (DOE) is a technique used to determine the location of sampling points and is
included as part of the Response Surface, Goal Driven Optimization, and Six Sigma Analysis systems.

A Design of Experiments cell is part of the Response Surface, Goal Driven Optimization, and Six Sigma
systems (although the DOE for a Six Sigma Analysis system treats the inputs and outputs differently
and cannot share data with the other types of DOEs). The Design of Experiments tab allows you to
preview (generates the points but does not solve them) or generate and solve a DOE Design Point
matrix. On the DOE tab, you can set the input parameter limits, set the properties of the DOE solve,
view the Table of Design Points, and view several parameter charts.

Related Topics:

"Using Design of Experiments" (p. 63)

Working with Design Points (p. 193)

Design of Experiments Options (p. 22)

The following views in the Design of Experiments tab allow you to customize your DOE and view the
updated results:

Outline: Allows you to:

Select the Design of Experiments cell and change its properties.

Select the input parameters and change their limits.

Select the output parameters and view their minimum and maximum values.

Select one of the available charts for display. You can insert directly into the Outline as many new charts
from the chart toolbox as you want. When a chart is selected, you can change the data properties of the
chart (chart type and parameters displayed).


Properties: (See Setting Up the Design of Experiments (p. 63) for complete descriptions of DOE options.)

Preserve Design Points After DX Run: Select this check box if you want to retain design points at the
project level from each update of this DOE you perform. If this property is set, the following property is
available:

Export Project for Each Preserved Design Point: If selected, in addition to saving the design points
to the project Table of Design Points, exports each of the design exploration design points as a sep-
arate project in the same directory as the original project.

Number of Retries: Indicates the number of times DesignXplorer will try to update the failed design
points. If the Retry All Failed Design Points option is not selected in the Options dialog, defaults to 0.
Otherwise, defaults to the value specified for the Number of Retries option.

Retry Delay (seconds): If the Number of Retries property is not set to 0, indicates how many seconds
will elapse between tries.
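
The retry behavior described by these two properties can be sketched as a simple loop (the function names are hypothetical, not DesignXplorer API calls):

```python
import time

def update_design_point(solve, number_of_retries=0, retry_delay=0.0):
    """Attempt a design point update, retrying failed updates as described
    by the Number of Retries and Retry Delay properties."""
    attempts = 1 + number_of_retries
    for attempt in range(attempts):
        try:
            return solve()
        except RuntimeError:
            if attempt == attempts - 1:
                raise                      # out of retries: report failure
            time.sleep(retry_delay)        # wait before trying again

calls = {"n": 0}
def flaky_solve():
    """Stand-in solver that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("solver failed")
    return "updated"

assert update_design_point(flaky_solve, number_of_retries=2) == "updated"
assert calls["n"] == 3
```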

Design of Experiments Type

Central Composite Design

Optimal Space-Filling Design

Box-Behnken Design

Sparse Grid Initialization

Custom

Custom + Sampling

Latin Hypercube Sampling Design

Settings for selected Design of Experiments Type, where applicable

Table: Displays a matrix of the design points that populates automatically during the solving of the
points. Displays input and output parameters for each design point. You can add points manually if the
DOE Type is set to Custom.

Chart: Displays the available charts described below.

Related Topics:
Parameters Parallel Chart
Design Points vs Parameter Chart

Parameters Parallel Chart


Generates a graphical display of the DOE matrix using parallel Y axes to represent all of the inputs and
outputs. Select the chart from the Outline to display it in the Chart view. Use the Properties view as
follows:

Properties:

Display Full Parameter Name: Select to display the full parameter name in the results.

Use the Enabled check box to enable or disable the display of parameter axes on the chart.


Click on a line on the chart to display input and output values for that line in the Input Parameters and
Output Parameters sections of the Properties view.

Change various generic chart properties.

The chart plots only the updated design points. If the DOE matrix does not yet contain any updated
design points, the output parameters are automatically disabled on the chart; all design points are still
plotted, but only the axes corresponding to the input parameters are visible.

You can also look at this chart data a different way. If you right-click on the chart background, select
Edit Properties, and then change Chart Type to Spider Chart, the chart will show all input and
output parameters arranged in a set of equally spaced radial axes. Each DOE point is represented by a
corresponding envelope defined on these radial axes.

Design Points vs Parameter Chart


Generates a graphical display for plotting design points vs. any input or output parameter. Select the
chart cell in the Outline to display it in the Chart view. Use the Properties view as follows:

Properties:

Display Full Parameter Name: Select to display the full parameter name in the results.

X-Axis (top, bottom), Y-Axis (right, left): Design points (on the top and bottom X axes) can be plotted
against any of the input and output parameters (on any of the axes).

Change various generic chart properties.

Parameters Correlation Component Reference


A linear association between parameters is evaluated using Spearman's or Pearson's product-moment
coefficient. A correlation/sensitivity matrix is generated to show the correlation between the parameters
and the sensitivity of the output parameters to the input parameters.

A nonlinear (quadratic) association between parameters is evaluated using the coefficient of determination
of a quadratic regression between parameters. A determination matrix is generated between the
parameters to convey quadratic correlation, if any, that is not detectable with the linear
Spearman's or Pearson's correlation coefficient.
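
The two coefficients can be computed as follows; Spearman's coefficient is simply Pearson's coefficient applied to the ranks of the data (ties are ignored in this sketch for brevity):

```python
def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman's rank correlation: Pearson applied to the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [v ** 3 for v in x]                 # monotonic but nonlinear relation
assert abs(spearman(x, y) - 1.0) < 1e-12   # rank correlation is perfect
assert pearson(x, y) < 1.0                 # linear correlation is not
```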

Related Topics:

"Using Parameters Correlation" (p. 51)

Working with Design Points (p. 193)

Correlation Coefficient Theory

The following views in the Parameters Correlation tab allow you to customize your search and view
the results:

Outline: Allows you to:

Select the Parameters Correlation cell and change its properties and view the number of samples gen-
erated for this correlation.


Select the input parameters and change their limits.

Select the output parameters and view their minimum and maximum values.

Select one of the available charts for display. When a chart is selected, you can change the data properties
of the chart (chart type and parameters displayed).

Properties:

Preserve Design Points After DX Run: Select this check box if you want to retain design points at the
project level from this parameters correlation. If this property is set, the following property is available:

Export Project for Each Preserved Design Point: If selected, in addition to saving the design points
to the project Table of Design Points, exports each of the design exploration design points as a sep-
arate project in the same directory as the original project.

Note

There is no way to view the design points created for a parameters correlation other than
preserving them at the project level.

Number of Retries: Indicates the number of times DesignXplorer will try to update the failed design
points. If the Retry All Failed Design Points option is not selected in the Options dialog, defaults to 0.
Otherwise, defaults to the value specified for the Number of Retries option. Available only for correlations
not linked to a response surface.

Retry Delay (seconds): If the Number of Retries property is not set to 0, indicates how much time
will elapse between tries.

Reuse the samples already generated: Select this check box if you want to reuse the samples generated
in a previous correlation.

Choose Correlation Type algorithm:

Spearman

Pearson

Number of Samples: The maximum number of samples to generate for this correlation.

Choose Auto Stop Type (p. 53):

Enable Auto Stop

Mean Value Accuracy: Available if Auto Stop is enabled. Set the desired accuracy for the mean value
of the sample set.

Standard Deviation Accuracy: Available if Auto Stop is enabled. Set the desired accuracy for the
standard deviation of the sample set.

Convergence Check Frequency: Available if Auto Stop is enabled. Number of simulations to execute
before checking for convergence.


Size of Generated Sample Set: Available if Auto Stop is enabled. Number of samples generated for
the correlation solution.

Execute All Simulations
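
The Auto Stop criteria above can be sketched as a batch-wise convergence check; the stopping rule below (comparing running mean and standard deviation between consecutive checks) is an assumed interpretation for illustration:

```python
import random

def auto_stop(samples, mean_accuracy, std_accuracy, check_every):
    """Consume samples in batches, stopping once the running mean and
    standard deviation both change by less than the requested accuracies
    between consecutive convergence checks."""
    data, prev = [], None
    for sample in samples:
        data.append(sample)
        if len(data) % check_every == 0:
            n = len(data)
            mean = sum(data) / n
            std = (sum((v - mean) ** 2 for v in data) / (n - 1)) ** 0.5
            if prev is not None:
                if (abs(mean - prev[0]) <= mean_accuracy
                        and abs(std - prev[1]) <= std_accuracy):
                    return len(data)       # converged: stop sampling early
            prev = (mean, std)
    return len(data)                       # hit the maximum sample count

rng = random.Random(0)
stream = (rng.gauss(0.0, 1.0) for _ in range(10_000))
n_used = auto_stop(stream, mean_accuracy=0.01, std_accuracy=0.01,
                   check_every=100)
assert n_used <= 10_000
```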

Table: Displays both a correlation matrix and a determination matrix for the input and output parameters.

Chart: Displays the available charts described below.

The charts include:


Correlation Scatter Chart
Correlation Matrix Chart
Determination Matrix Chart
Sensitivities Chart
Determination Histogram Chart

Correlation Scatter Chart


Shows a sample scatter plot of a parameter pair. Two trend lines, linear and quadratic curves, are added
to the sample points of the parameter pair. The trend line equations are displayed in the chart legend.
The chart gives a graphical indication of the degree of correlation between the parameter pair for
both linear and quadratic trends.

You can create a Correlation Scatter chart for a given parameter combination by right-clicking on the
associated cell in the Correlation Matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation
Scatter.

To view the Correlation Scatter chart, select Correlation Scatter under the Chart node of the Outline
view. Use the Properties view as follows:

Properties:

Choose the parameters to display on the X Axis and Y Axis.


Enable or disable the display of the Quadratic and Linear trend lines.

View the Coefficient of Determination and the equations for the Quadratic and Linear trend lines.

Change various generic chart properties.

Related Topics:

Viewing the Quadratic Correlation Information (p. 55)

Correlation Matrix Chart


Provides information about the linear correlation between a parameter pair, if any. The degree of cor-
relation is indicated by color in the matrix. Placing your cursor over a particular square will show you
the correlation value for the two parameters associated with that square.

To view the Correlation Matrix chart, select Correlation Matrix under the Chart node of the Outline
view. Use the Properties view as follows:

Properties:

Enable or disable the parameters that are displayed on the chart.

Change various generic chart properties.

Related Topics:

Viewing Significance and Correlation Values (p. 55)

Determination Matrix Chart


Provides information about the nonlinear (quadratic) correlation between a parameter pair, if any. The
degree of correlation is indicated by color in the matrix. Placing your cursor over a particular square
will show you the correlation value for the two parameters associated with that square.

To view the Determination Matrix chart, select Determination Matrix under the Chart node of the
Outline view. Use the Properties view as follows:

Properties:

Enable or disable the parameters that are displayed on the chart.

Change various generic chart properties.

Related Topics:

Viewing the Quadratic Correlation Information (p. 55)

Sensitivities Chart
Allows you to graphically view the global sensitivities of each output parameter with respect to the
input parameters.

To view the Sensitivities chart, select Sensitivities under the Chart node of the Outline view. Use the
Properties view as follows:


Properties:

Chart Mode: Set to Bar or Pie.

Enable or disable the parameters that are displayed on the chart.

Change various generic chart properties.

Related Topics:

Working with Sensitivities (p. 204)

Viewing Significance and Correlation Values (p. 55)

Determination Histogram Chart


Determination Histogram charts allow you to see the impact of the input parameters for a given output
parameter.

The full model R2 represents the variability of the output parameter that can be explained by a linear
(or quadratic) correlation between the input parameters and the output parameter.

The height of each bar corresponds to the linear (or quadratic) determination coefficient of the
associated input with respect to the selected output.
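
A determination coefficient of this kind can be computed as the R2 of a quadratic regression of the output on one input. A minimal sketch; for brevity it assumes the input samples are symmetric about zero, which decouples the normal equations (odd moments vanish):

```python
import math

def quadratic_r2(xs, ys):
    """Coefficient of determination (R^2) of a quadratic regression of y on x,
    assuming the x samples are symmetric about zero."""
    n = len(xs)
    s2 = sum(x ** 2 for x in xs)
    s4 = sum(x ** 4 for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sx2y = sum(x * x * y for x, y in zip(xs, ys))
    b = sxy / s2                            # linear coefficient
    det = n * s4 - s2 * s2
    a = (s4 * sy - s2 * sx2y) / det         # intercept
    c = (n * sx2y - s2 * sy) / det          # quadratic coefficient
    fit = [a + b * x + c * x * x for x in xs]
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fit))
    ss_tot = sum((y - sy / n) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

xs = [i / 10.0 for i in range(-10, 11)]     # 21 symmetric points in [-1, 1]
assert quadratic_r2(xs, [x * x for x in xs]) > 0.999          # pure quadratic
assert quadratic_r2(xs, [math.sin(6 * x) for x in xs]) < 0.5  # not quadratic
```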

To view the Determination Histogram chart, select Determination Histogram under the Chart node
of the Outline view. Use the Properties view as follows:

Properties:

Determination Type: Set to Linear or Quadratic.

Threshold R2: Enables you to filter the input parameters by hiding those with a determination
coefficient lower than the given threshold.

Choose the output parameter to display.

Change various generic chart properties.

Response Surface Component Reference


Builds a response surface from the DOE design points' input and output values based on the chosen
Response Surface Type. In the Response Surface tab, you can view the input parameter limits and
initial values, set the properties of the response surface algorithm, view the Response Points table, and
view several types of Response charts.

Related Topics:

Working with Design Points (p. 193)

"Using Response Surfaces" (p. 77)

Standard Response Surface - Full 2nd-Order Polynomial (p. 77)

Kriging (p. 78)


Non-Parametric Regression (p. 83)

Neural Network (p. 83)

Sparse Grid (p. 84)

Performing a Manual Refinement (p. 92)

How Kriging Auto-Refinement Works (p. 79)

Goodness of Fit (p. 92)

Min-Max Search (p. 98)

The following views in the Response Surface tab allow you to customize your response surface and
view the results:

Outline: Allows you to:

Select the Response Surface cell and change its properties, including Response Surface Type.

Select input parameters to view their properties.

Select the output parameters and view their minimum and maximum values, and set their transformation
type.

Enable or disable the Min-Max Search and select the Min-Max Search cell to view the table of results.

View Goodness of Fit information for all response surfaces for the system.

Select a response point and view its input and output parameter properties.

You can change the input values in the Properties view and see the corresponding output values.

From the right-click context menu, you can reset the input parameter values of the selected response
point to the initial values that were used to solve for the response surface.

In the Properties view, you can enter notes about the selected response point.

Select added response points and view their properties. Drag charts from the toolbox to the response
points to view data for the added response points. You can insert a new chart to the response point via
the right-click menu.

Select one of the available charts for display. When a chart is selected, you can change the data properties
of the chart (chart type and parameters displayed).

Properties:

Preserve Design Points After DX Run: Select this check box if you want to retain at the project level
design points that are created when refinements are run for this response surface. If this property is set,
the following property is available:


Export Project for Each Preserved Design Point: If selected, in addition to saving the design points
to the project Table of Design Points, exports each of the design exploration design points as a sep-
arate project in the same directory as the original project.

Note

Selecting this box will not preserve any design points unless you run either a manual re-
finement or one of the Kriging refinements, since the Response Surface uses the design
points generated by the DOE. If the DOE of the Response Surface does not preserve design
points, when you perform a refinement, only the refinement points will be preserved at
the project level. If the DOE is set to preserve design points, and the Response Surface is
set to preserve design points, when you perform a refinement the project will contain the
DOE design points and the refinement points.

Number of Retries: Indicates the number of times DesignXplorer will try to update the failed design
points. If the Retry All Failed Design Points option is not selected in the Options dialog, defaults to 0.
Otherwise, defaults to the value specified for the Number of Retries option.

Retry Delay (seconds): If the Number of Retries property is not set to 0, indicates how much time
will elapse between tries.

Choice of Response Surface Type algorithm

Standard Response Surface Full 2nd-Order Polynomial

Kriging

Non-Parametric Regression

Neural Network

Sparse Grid

Refinement Type: The Manual option is available for all Response Surface Types (except Sparse Grid,
which only supports auto-refinement), allowing you to enter refinement points manually; the Auto-Re-
finement option is also available for the Kriging Response Surface Type.

Generate Verification Points: Select this check box to have the response surface generate and calculate
the number of verification points of your choice (1 by default). The results are included in the table
and chart of the Goodness of Fit information.

Settings for selected algorithm, as applicable

Table:

Displays a list of the refinement points created for this Response Surface. Refinement points created
manually and generated by using the Kriging refinement capabilities are listed in this table. Add a
manual refinement point by entering a value for one of the inputs on the New Refinement Point row of
the table.

Displays a list of the response points for the response surface. Displays input and output parameters for
each response point. You can change the points in the table and add new ones in several different ways:


You can change the input values of a response point and see the corresponding output value.

You can reset the input parameter values of a response point to the initial values that were used to
solve for the Response Surface by right-clicking on the response point in the table and selecting Reset
Response Point.

You can add response points or refinement points to the table by right-clicking on a chart such as the
Response Surface chart to select a point, or selecting one or several rows in a table (for example, the
Min-Max Search or the optimization Candidates tables), and selecting Explore Response Surface at
Point, or Insert as Refinement Point(s).

You can create new response points by directly entering values into the input parameter cells of the
row designated with an asterisk (*).

Displays a list of the verification points for the response surface. Verification points created manually and
generated automatically during the response surface update are listed in this table. Various operations
are available for the verification points:

You can add a verification point by entering the input parameter values.

You can delete one or several verification points using the right-click Delete operation.

You can update one or several verification points using the right-click Update operation, which performs
a design point update for each of the selected points.

Chart: Displays the available charts described below for each response point.

Related Topics:
Response Chart
Local Sensitivity Charts
Spider Chart

Response Chart
The Response chart allows you to graphically view the impact that changing each input parameter has
on the displayed output parameter. Select the Response cell in the Outline to display the Response
chart in the Chart view.

You can add response points to the Response Surface Table by right-clicking on the Response chart
and selecting Explore Response Surface at Point, Insert as Design Point, Insert as Refinement Point
or Insert as Verification Point. Use the Properties view as follows:

Properties:

Display Parameter Full Name: Specify whether the full parameter name or the parameter ID will be dis-
played on the chart.

Mode: Set to 2D, 3D, or 2D Slices.

Chart Resolution Along X: Set the number of points that you want to appear on the X-axis response
curve. Defaults to 25.

Chart Resolution Along Y: Set the number of points that you want to appear on the Y-axis response
curve (for 3D). Defaults to 25.


Number of Slices: Set the number of slices (for 2D Slices with continuous input parameters).

Show Design Points: Specify whether design points will be displayed on the chart.

Choose the input parameter(s) to display on the X Axis, Y Axis (3D), and Slice Axis (2D Slices).

Choose the output parameter to display on the Z Axis (3D) or Y Axis (2D and 2D Slices).

Use the sliders or drop-down menus to change the values of the input parameters that are not displayed
to see how they affect the values of displayed parameters. You can enter specific values in the boxes
above the sliders.

View the interpolated output parameter values for the selected set of input parameter values.

Change various generic chart properties.

Related Topics:

Response Surface Charts (p. 99)

Local Sensitivity Charts


Allows you to graphically view the impact that changing each input parameter has on the output
parameters. Select the Local Sensitivity or Local Sensitivity Curves chart cell under Response Point
in the Outline to display the chart in the Chart view. Use the Properties view as follows:

Properties:

Chart Mode: Set to Bar or Pie (Local Sensitivity chart).

Axis Range: Set to Use Min Max of the Output Parameter or Use Curves Data (Local Sensitivity Curves
chart).

Chart Resolution: Set the number of points per curve. Defaults to 25.

Use the sliders to change the values of the input parameters to see how the sensitivity is changed for
each output.

View the interpolated output parameter values for the selected set of input parameter values.

Change various generic chart properties.
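
The "impact that changing each input parameter has" at a response point can be approximated by central finite differences on the fitted surface, holding the other inputs fixed. A sketch with a hypothetical surface function and invented parameter names:

```python
def local_sensitivities(f, point, step=1e-3):
    """Approximate local sensitivities of output f at a response point by
    central finite differences on each input, holding the others fixed.
    `f` stands in for response-surface evaluation."""
    sens = {}
    for name, value in point.items():
        hi = dict(point, **{name: value + step})
        lo = dict(point, **{name: value - step})
        sens[name] = (f(hi) - f(lo)) / (2 * step)
    return sens

def surface(p):
    """Hypothetical fitted response surface, for illustration only."""
    return 3.0 * p["thickness"] + 0.5 * p["load"] ** 2

s = local_sensitivities(surface, {"thickness": 1.0, "load": 2.0})
assert abs(s["thickness"] - 3.0) < 1e-6    # linear term: constant slope
assert abs(s["load"] - 2.0) < 1e-6         # quadratic term: slope = load
```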

Related Topics:

Response Surface Charts (p. 99)

Using Local Sensitivity Charts (p. 112)

Spider Chart
Allows you visualize the impact that changing the input parameters has on all of the output parameters
simultaneously. Select the Spider chart cell in the Outline to display the chart in the Chart view. Use
the Properties view as follows:

Properties:


Use the sliders to change the values of the input parameters to see how they affect the output parameters.

View the interpolated output parameter values for the selected set of input parameter values.

Change various generic chart properties.

You can also look at this chart data a different way. First, right-click on the chart background, select
Edit Properties, and change Chart Type to Parallel Coordinate. Then click on the Spider cell under
Response Point in the Outline view and use the input parameter sliders in Properties view to see
how specific settings of various input parameters change the output parameters, which are arranged
in parallel 'Y' axes in the chart.

Related Topics:

Response Surface Charts (p. 99)

Using the Spider Chart (p. 112)

Optimization Component Reference


Goal Driven Optimization (GDO) is a constrained, multi-objective optimization technique in which the
best possible designs are obtained from a sample set given the objectives or constraints you set for
parameters. Each type of GDO system (Response Surface Optimization and Direct Optimization)
contains an Optimization component. The DOE and Response Surface cells are used in the same way
as described previously in this section. For Direct Optimization, the creation of data transfer links is also
possible.

Related Topics:

"Using Goal Driven Optimization" (p. 125)

Understanding Goal Driven Optimization (p. 224)

The following views in the Optimization tab allow you to customize your GDO and view the results:

Outline: Allows you to select the following nodes and perform related actions in the tab:

Optimization node:

Change optimization properties and view the size of the generated sample set.

View an optimization summary with details on the study, method, and returned candidate points.

View the Convergence Criteria chart for the optimization. See Using the Convergence Criteria Chart (p. 162).

Objectives and Constraints node:

View the objectives and constraints defined in the project.

Enable or disable objectives and constraints.

Select an objective or constraint and view its properties, the calculated minimum and maximum values
of each of the outputs, and History chart. See History Chart (p. 43) for details.

Domain node:

View the available input parameters for the optimization.

Enable or disable input parameters or parameter relationships.

Select an input parameter or parameter relationship to view or edit its properties, or to see its History
chart. See History Chart (p. 43) for details.

Raw Optimization Data node: For Direct Optimization systems, when an optimization update is finished,
the design point data calculated during the optimization is saved by DesignXplorer. The data can be ac-
cessed by clicking the Raw Optimization Data node in the Outline view.

Note

The design point data is displayed without analysis or optimization results; the data does
not show feasibility, ratings, Pareto fronts, etc.

Convergence Criteria node: Select this node to view the Convergence Criteria chart and specify the cri-
teria to be displayed. See Using the Convergence Criteria Chart (p. 162).

Results node:

Select one of the available results types to view results in the Charts view and, in some cases, in the
Table view. When a result is selected, you can change the data properties of its related chart (x- and
y-axis parameters, parameters displayed on a bar chart, etc.) and edit its table data.

Properties: When the Optimization node is selected in the Outline view, the Properties view allows
you to specify:

Method Name

MOGA

NLPQL

MISQP

Screening

Adaptive Single-Objective

Adaptive Multiple-Objective

External optimizers as defined by the optimization extension(s) loaded to the project. For more inform-
ation, see Performing an Optimization with an External Optimizer (p. 146).

Relevant settings for the selected Method Name. Depending on the method of optimization, these can
include specifications for samples, sample sets, number of iterations, and allowable convergence or Pareto
percentages.

See Goal Driven Optimization Methods (p. 128) for more information.

Table: Before the update, allows you to specify the following input parameter domain settings and
objective/constraint settings:

Optimization Domain

Set the Upper Bound and Lower Bound for each input parameter. For NLPQL and MISQP optimization,
there is also a Starting Value setting.

Set the Left Expression, Right Expression, and Operator for each parameter relationship.

See Defining the Optimization Domain (p. 148) for more information.

Optimization Objectives and Constraints

You can define an Objective and/or a Constraint for each parameter. Options vary according to para-
meter type.

For any parameter with an Objective Type of Seek Target, you can specify a Target.

For any parameter with a Constraint Type of Lower Bound <= Values <= Upper Bound, you can
specify a target range with the Lower Bound and Upper Bound.

For any parameter with an Objective or Constraint defined, you can specify the relative Objective
Importance or Constraint Importance of that parameter in regard to the other objectives.

For any parameter with a Constraint defined (i.e., output parameters, discrete parameters, or continuous
parameters with Manufacturable Values), you can specify the Constraint Handling for that parameter.

See Defining Optimization Objectives and Constraints (p. 153) for more information on constraints
and constraint handling options.

During a Direct Optimization update, if you select any objective, constraint, or input parameter from
the Outline view, the Table view shows all of the design points being calculated by the optimization.
For iterative optimization methods, the display is refreshed dynamically after each iteration, allowing
you to track the progress of the optimization by simultaneously viewing design points in the Table
view, the History charts in the Charts view, and the History chart sparklines in the Outline view. (For
the Screening optimization method, these objects are updated only after the optimization has completed.)

After the update, when you select Candidate Points under the Results node in the Outline view, the
Table view displays up to the maximum number of requested candidates generated by the optimization.
The number of gold stars or red crosses displayed next to each objective-driven parameter indicates
how well the parameter meets the stated objective, from three red crosses (the worst) to three gold
stars (the best). The Table view also allows you to add and edit your own candidate points, view the
values of candidate point expressions, and see the calculated percentage of variation for each parameter
for which an objective has been defined. For more information, see Working with Candidate Points (p. 157).

Note

Goal-driven parameter values with inequality constraints receive either three stars (the
constraint is met) or three red crosses (the constraint is violated).

Predicted output values can be verified for each candidate by using the contextual menu entry Verify
Candidates by Design Point Update (p. 161).

Results: The following results types are available:

Convergence Criteria Chart (p. 43)

History Chart (p. 43)

Candidate Points Results (p. 44)

Tradeoff Chart (p. 44)

Samples Chart (p. 45)

Sensitivities Chart (p. 45)

Convergence Criteria Chart


Allows you to view the evolution of the convergence criteria for each of the iterative optimization
methods and is updated after each iteration. It is not available for the Screening optimization method.

The Convergence Criteria chart is the default optimization chart, so it displays in the Chart view unless
another type of chart is selected. When the Convergence Criteria node is selected in the Outline view,
the Properties view displays the convergence criteria relevant to the selected optimization method in
read-only mode. Various generic chart properties can be changed for the Convergence Criteria chart.

The chart remains available when the optimization update is complete. The legend shows the color-
coding for the convergence criteria.

Related Topics:

Using the Convergence Criteria Chart (p. 162)

Using the Convergence Criteria Chart for Multiple-Objective Optimization (p. 163)

Using the Convergence Criteria Chart for Single-Objective Optimization (p. 164)

History Chart
Allows you to view the history of a single enabled objective/constraint, input parameter, or parameter
relationship during the update process. For iterative optimization methods, the History chart is updated
after each iteration. For the Screening optimization method, it is updated only when the optimization
is complete.

Select an item under the Objectives and Constraints node or an input parameter or parameter rela-
tionship under the Domain node in the Outline view. When an object is selected, the Properties view
displays various properties for the object. Various generic chart properties can be changed for both
types of History chart.

In the Chart view, the color-coded legend allows you to interpret the chart. In the Outline view, a
sparkline graphic of the History chart is displayed next to each objective/constraint and input parameter
object.

Related Topics:

Using the History Chart (p. 165)

Working with the History Chart in the Chart View (p. 166)

Viewing History Chart Sparklines in the Outline View (p. 169)

Using the Objective/Constraint History Chart (p. 170)

Using the Input Parameter History Chart (p. 171)

Candidate Points Results


The Candidate Points Results object comprises both the Chart and Table views to display candidate
points and data for one or more selected parameters. The Chart view shows a color-coded legend that
allows you to interpret the samples, candidate points identified by the optimization, candidates inserted
manually, and candidates for which output values have been verified by a design point update. In the
Table view, output parameter values calculated from a simulation are displayed in black text, while
output parameter values calculated from a response surface are displayed in a custom color specified
in the Options dialog. For details, see Response Surface Options (p. 23).

Select Candidate Points under the Results node in the Optimization tab Outline view.

Properties: (applied to results in the Table and Chart views)

Display Parameter Relationships: Select to display parameter relationships in the candidate points table.

Display Full Parameter Name: Select to display the full parameter name in the results.

Show Candidates: Select to show candidates in the results.

Coloring Method: Specify whether the results will be colored according to candidate type or source type.

Show Samples: Select to show samples in the results.

Show Starting Point: Select to show the starting point on the chart (NLPQL and MISQP only).

Show Verified Candidates: Select to show verified candidates in the results (Response Surface Optimization
only).

Enable or disable the display of input and output parameters.

Change various generic chart properties for the results in the Chart view.

Related Topics:

Using the Candidate Points Results (p. 173)

Understanding the Candidate Points Results Display (p. 173)

Candidate Points Results: Properties (p. 174)

Tradeoff Chart
Allows you to view the Pareto fronts created from the samples generated in the GDO. Select the Tradeoff
chart cell under Charts in the Outline to display the chart in the Chart view. Use the Properties view
as follows:

Properties:

Chart Mode: Set to 2D or 3D.

Number of Pareto Fronts to Show: Set the number of Pareto fronts that are displayed on the chart.

Show infeasible points: Enable or disable the display of infeasible points. (Available when constraints
are defined.)

Click on a point on the chart to display a Parameters section that shows the values of the input and
output parameters for that point.

Set the parameters to be displayed on the X- and Y-axis.

Change various generic chart properties.
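A Pareto front is a set of mutually non-dominated samples: no sample in the front can improve one objective without worsening another. A minimal sketch of how successive fronts can be extracted from a sample set, assuming all objectives are minimized (an illustration only, not DesignXplorer's implementation):

```python
def dominates(a, b):
    """True if point a is at least as good as b on every objective
    and strictly better on at least one (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(points):
    """Split a list of objective vectors into successive Pareto fronts."""
    remaining = list(points)
    fronts = []
    while remaining:
        # the next front holds every point not dominated by another remaining point
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

samples = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
fronts = pareto_fronts(samples)
# fronts[0] contains the non-dominated samples; later fronts are successively worse
```

Setting Number of Pareto Fronts to Show then simply controls how many of these successive sets the chart draws.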

Related Topics:

Using the Tradeoff Chart (p. 176)

Using Tradeoff Studies (p. 176)

Samples Chart
Allows you to visually explore a sample set given defined objectives. Select the Samples chart cell under
Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:

Properties:

Chart Mode: Set to Candidates or Pareto Fronts. For Pareto Fronts, the following options can be set:

Number of Pareto Fronts to Show: Either enter the value or use the slider to select the number of
Pareto Fronts displayed.

Coloring Method: Can be set to per Samples or per Pareto Front.

Show infeasible points: Enable or disable the display of infeasible points. (Available when constraints
are defined.)

Click on a line on the chart to display the values of the input and output parameters for that line in the
Parameters section. Use the Enabled check box to enable or disable the display of parameter axes on
the chart.

Change various generic chart properties.

Sensitivities Chart
Allows you to graphically view the global sensitivities of each output parameter with respect to the
input parameters. Select the Sensitivities chart cell under Charts in the Outline to display the chart
in the Chart view. Use the Properties view as follows:

Properties:

Chart Mode: Set to Bar or Pie.

Enable or disable the parameters that are displayed on the chart.

Change various generic chart properties.

Related Topics:

Working with Sensitivities (p. 204)

Using the Sensitivities Chart (GDO) (p. 175)

Six Sigma Analysis Component Reference


The Six Sigma Analysis (SSA) system consists of a Design of Experiments cell, a Response Surface
cell, and a Six Sigma Analysis cell. The Design of Experiments cell allows you to set up the input
parameters and generate the samples for the analysis. The Response Surface that results will be the
same as the Response Surface from a standard Response Surface analysis. The Six Sigma Analysis cell
allows you to set up the analysis type and view the results of the analysis.

By default, the Six Sigma Analysis DOE includes all of the input parameters and sets them to be uncer-
tainty parameters. When you run a Six Sigma Analysis, you will need to set up the uncertainty parameters
in the SSA DOE. If you want any input parameters to be treated as deterministic parameters, uncheck
the box next to those parameters in the SSA DOE Outline view.

Related Topics:
Design of Experiments (SSA)
Six Sigma Analysis
Sensitivities Chart (SSA)

Design of Experiments (SSA)


The Design of Experiments component in a Six Sigma Analysis system is the same as the Design of
Experiments component in other systems, with the exception of the input parameters being designated
as uncertainty parameters. You will need to set up the input parameter options before solving the DOE.

Related Topics:

Design of Experiments Component Reference (p. 29)

The following views in the Design of Experiments tab contain components that are unique to the Six
Sigma Analysis Design of Experiments:

Outline: Allows you to:

Select the input parameters and change their distribution properties.

View the Skewness and Kurtosis properties for each input parameter distribution.

View the calculated Mean and Standard Deviation for all Distribution Types except Normal and Truncated
Normal, for which you set those values.

View the initial Value for each input parameter.

View the Lower Bound and Upper Bound (bounds used to generate the Design of Experiments) for each
parameter.

Properties: (The properties for the uncertainty input parameters are described here.)

Distribution Type: Type of distribution associated with the input parameter

Uniform

Triangular

Normal

Truncated Normal

Lognormal

Exponential

Beta

Weibull

Maximum Likely Value (Triangular Only)

Log Mean (Lognormal only)

Log Standard Deviation (Lognormal only)

Exponential Decay Parameter (Exponential only)

Beta Shape R (Beta only)

Beta Shape T (Beta only)

Weibull Exponent (Weibull only)

Weibull Characteristic Value (Weibull only)

Non-Truncated Normal Mean (Truncated Normal only)

Non-Truncated Normal Standard Deviation (Truncated Normal only)

Mean (Normal only)

Standard Deviation (Normal only)

Distribution Lower Bound (Can be set for all but Normal and Lognormal)

Distribution Upper Bound (Uniform, Triangular, Truncated Normal, and Beta only)
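The distribution-specific properties listed above correspond to the parameters of standard probability density functions. As an illustration, the densities for the Triangular and Weibull types can be written as follows (standard textbook formulas; the comments map them onto the property names above):

```python
import math

def triangular_pdf(x, lower, mode, upper):
    """Triangular density; 'mode' corresponds to the Maximum Likely Value
    property, 'lower'/'upper' to the Distribution Lower/Upper Bound."""
    if x < lower or x > upper:
        return 0.0
    if x <= mode:
        return 2.0 * (x - lower) / ((upper - lower) * (mode - lower))
    return 2.0 * (upper - x) / ((upper - lower) * (upper - mode))

def weibull_pdf(x, exponent, characteristic_value):
    """Weibull density; 'exponent' corresponds to the Weibull Exponent (shape)
    and 'characteristic_value' to the Weibull Characteristic Value (scale)."""
    if x < 0.0:
        return 0.0
    z = x / characteristic_value
    return (exponent / characteristic_value) * z ** (exponent - 1.0) \
        * math.exp(-z ** exponent)
```

With a Weibull Exponent of 1, for example, the Weibull density reduces to the exponential distribution.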

Table: When output parameters or charts are selected in the Outline view, displays the normal DOE
data grid of the design points that populates automatically during the solving of the points. When an
input parameter is selected in the Outline, displays a table that shows for each of the samples in the
set:

Quantile: The input parameter value for the given PDF and CDF values.

PDF: (Probability Density Function) The density of the input parameter along the X-axis.

CDF: (Cumulative Distribution Function) The integral of the PDF along the X-axis.
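The three columns are related through standard distribution theory: the CDF is the integral of the PDF, and the quantile is the inverse of the CDF. A sketch for a normal distribution (illustrative only; DesignXplorer computes these values internally):

```python
import math

def normal_pdf(x, mean, sd):
    """Probability density of a normal distribution at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mean, sd):
    """Cumulative probability P(X <= x): the integral of the PDF up to x."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def normal_quantile(p, mean, sd, tol=1e-10):
    """Quantile (inverse CDF) found by bisection: the x with CDF(x) = p."""
    lo, hi = mean - 10.0 * sd, mean + 10.0 * sd
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid, mean, sd) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For a sample value x, the table's three columns are related as follows:
x = 12.0
cdf = normal_cdf(x, 10.0, 2.0)        # CDF column at x
pdf = normal_pdf(x, 10.0, 2.0)        # PDF column at x
q = normal_quantile(cdf, 10.0, 2.0)   # Quantile column: recovers x from its CDF
```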

Chart: When an input parameter is selected in the Outline view, displays the Probability Density
Function and the Cumulative Distribution Function for the Distribution Type chosen for the input
parameter. There are also Parameters Parallel and Design Points vs Parameter charts as you would have
in a standard DOE.

Six Sigma Analysis


Once you have assigned the distribution functions to the input parameters in the DOE, you can update
the project to perform the Six Sigma Analysis. Once the analysis is finished, edit the Six Sigma Analysis
cell in the project to see the results.

Related Topics:

"Using Six Sigma Analysis" (p. 181)

Statistical Measures (p. 182)

The following views in the Six Sigma Analysis tab allow you to customize your analysis and view
the results:

Outline: Allows you to:

Select each input parameter and view its distribution properties, statistics, upper and lower bounds, initial
value, and distribution chart.

Select each output parameter and view its calculated maximum and minimum values, statistics and distri-
bution chart.

Set the table display format for each parameter to Quantile-Percentile or Percentile-Quantile.

Properties:

For the Six Sigma Analysis, the following properties can be set:

Sampling Type: Type of sampling used for the Six Sigma Analysis

LHS

WLHS

Number of Samples

For the parameters, the following properties can be set:

Probability Table: Manner in which the analysis information is displayed in the Table view.

Quantile-Percentile

Percentile-Quantile

Table: Displays a table that shows for each of the samples in the set:

<Parameter Name>: a valid value in the parameter's range.

Probability: Probability that the parameter will be less than or equal to the parameter value shown.

Sigma Level: The approximate number of standard deviations away from the sample mean for the given
sample value.

If Probability Table is set to Quantile-Percentile in the Statistics section of the Properties view, you
can edit a parameter value and see the corresponding Probability and Sigma Level values. If it is set
to Percentile-Quantile, the columns are reversed and you can enter a Probability or Sigma Level
value and see the corresponding changes in the other columns.
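The Probability and Sigma Level columns are two views of the same quantity, related through the standard normal CDF. A sketch of the conversion (standard formulas, not the exact DesignXplorer implementation):

```python
import math

def probability_from_sigma(sigma_level):
    """Probability that a normally distributed value falls below the sample
    mean plus sigma_level standard deviations (standard normal CDF)."""
    return 0.5 * (1.0 + math.erf(sigma_level / math.sqrt(2.0)))

def sigma_from_probability(p, tol=1e-10):
    """Inverse conversion by bisection on the standard normal CDF."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if probability_from_sigma(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. a Sigma Level of 3 corresponds to a Probability of about 99.865%
p3 = probability_from_sigma(3.0)
```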

Chart: When a parameter is selected in the Outline view, displays its Probability Density Function
and the Cumulative Distribution Function. A global Sensitivities chart is available in the outline.

Sensitivities Chart (SSA)


Allows you to graphically view the global sensitivities of each output parameter with respect to the
input parameters in the Six Sigma Analysis. Select the Sensitivities chart cell under Charts in the
Outline view to display the chart in the Chart view. Use the Properties view as follows:

Properties:

Chart Mode: Set to Bar or Pie.

Enable or disable the parameters that are displayed on the chart.

Change various generic chart properties.

Related Topics:

Working with Sensitivities (p. 204)

Statistical Sensitivities in a Six Sigma Analysis (p. 270)

Using Parameters Correlation
The application of Goal Driven Optimization (GDO) and Six Sigma Analysis (SSA) in a finite element-based
analysis framework is always a challenge in terms of solving time, especially when the finite element
model is large. For example, hundreds or thousands of finite element simulation runs are not uncommon
in SSA. If one simulation run takes hours to complete, it is almost impractical to perform SSA at all with
hundreds or even thousands of simulations.

To perform GDO or SSA in a finite element-based analysis framework, it is always recommended to perform
a Design of Experiments (DOE) study first. From the DOE study, a response surface is built within the design
space of interest. After the response surface is created, all simulation runs of the GDO or SSA become
function evaluations.

In the DOE study, however, the number of sampling points increases dramatically as the number of input
parameters increases. For example, a total of 149 sampling points (finite element evaluations) are needed
for 10 input variables using a Central Composite Design with a fractional factorial scheme. As the number
of input variables increases, the analysis becomes more and more intractable. In this case, you may want
to exclude unimportant input parameters from the DOE sampling in order to reduce unnecessary sampling
points. A correlation matrix is a tool to help you identify the input parameters deemed unimportant,
which can then be treated as deterministic parameters in SSA.

When to Use Parameters Correlation


As you add more input parameters to your Design of Experiments (DOE), the increase in the number
of design points can decrease the efficiency of the analysis process. In this case, you may wish to focus
on the most important inputs, while excluding inputs with a lesser impact on your intended design.
Removing these less important parameters from the DOE reduces the generation of unnecessary sampling
points.

Benefits of Parameters Correlation


A Parameters Correlation study allows you to:

Determine which input parameters have the most (and the least) impact on your design.

Identify the degree to which the relationship is linear/quadratic.

It also provides a variety of charts to assist in your assessment of parametric impacts. For more inform-
ation, see Parameters Correlation Charts (p. 55).

This section covers the following topics:


Sample Generation
Running a Parameters Correlation
Viewing the Quadratic Correlation Information
Determining Significance
Viewing Significance and Correlation Values
Parameters Correlation Charts

Sample Generation
The importance of an input to an output is determined from their correlation. The samples used for the
correlation calculation are generated with the Latin Hypercube Sampling (LHS) method. The Latin
Hypercube samples are generated so that the correlations among the input parameters are less than or
equal to 5%, and so that each sample is randomly generated but no two points share an input parameter
value.
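The stratified structure of Latin Hypercube samples can be sketched as follows: each of the n samples draws its value for each input from a distinct one of n equal-width strata, so no two points fall in the same stratum of any input (an illustration in the unit hypercube, not DesignXplorer's implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random.Random(0)):
    """Generate n_samples points in [0, 1)^n_dims so that, per dimension,
    each of the n_samples equal-width strata contains exactly one point."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)          # random pairing of strata to points
        for i, s in enumerate(strata):
            # random position inside stratum s, which has width 1/n_samples
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 2)
# per dimension, each decile [0, 0.1), [0.1, 0.2), ... holds exactly one point
```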

The Optimal Space-Filling Design (OSF) method of sample generation is an LHS design that is extended
with postprocessing to achieve uniform space distribution of points, maximizing the distance between
points. The image below illustrates how samples generated via the LHS method vary in placement from
those generated by the OSF sampling method.

The image below illustrates the generation of 20 samples via the Monte Carlo, LHS, and OSF methods.
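The distance-maximizing postprocessing can be illustrated with a crude maximin selection: among several candidate designs, keep the one whose closest pair of points is farthest apart. This is a simplified stand-in for the actual OSF algorithm:

```python
import itertools
import math
import random

def min_pairwise_distance(points):
    """Smallest Euclidean distance between any two points of a design."""
    return min(math.dist(a, b) for a, b in itertools.combinations(points, 2))

def best_of_random_designs(make_design, n_tries=50, rng=random.Random(1)):
    """Crude space-filling selection: generate several candidate designs and
    keep the one maximizing the minimum pairwise distance (maximin)."""
    return max((make_design(rng) for _ in range(n_tries)),
               key=min_pairwise_distance)

design = best_of_random_designs(
    lambda r: [(r.random(), r.random()) for _ in range(20)])
```

The real OSF method iteratively moves points of an LHS design rather than regenerating whole designs, but the selection criterion is the same idea.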

Pearson's Linear Correlation


Uses actual data for correlation evaluation.

Correlation coefficients are based on the sample values.

Used to correlate linear relationships.

Spearman's Rank Correlation


Uses ranks of data.

Correlation coefficients are based on the rank of samples.

Recognizes non-linear monotonic relationships (which are less restrictive than linear ones). In a monotonic
relationship, one of the following two things happens:

As the value of one variable increases, the value of the other variable increases as well.

As the value of one variable increases, the value of the other variable decreases.

Deemed the more accurate method.
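The relationship between the two coefficients can be sketched in a few lines: Spearman's coefficient is simply Pearson's coefficient applied to the ranks of the data (illustrative implementation; ties are ignored for simplicity):

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(values):
    """Rank of each value, 1 = smallest (ties ignored for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        result[i] = rank
    return result

def spearman(x, y):
    """Spearman rank correlation: Pearson applied to the ranks of the data."""
    return pearson(ranks(x), ranks(y))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [math.exp(v) for v in x]    # monotonic but strongly non-linear
# spearman(x, y) is exactly 1.0, while pearson(x, y) is noticeably below 1.0
```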

Running a Parameters Correlation


To create a Parameters Correlation system in your Project Schematic, do the following:

1. With the Project Schematic displayed, drag a Parameters Correlation template from the design explor-
ation area of the toolbox directly under the Parameter Set bar of the schematic.

2. After placing the Parameters Correlation system in the Project Schematic, double-click on the Parameters
Correlation cell to open the corresponding tab and set up the correlation options described in Parameters
Correlation Component Reference (p. 31).

3. Right-click the Parameters Correlation cell and select Update from the context menu.

4. When the update is finished, examine the results using the various charts shown in the Outline view.
See Parameters Correlation Component Reference (p. 31) for more information about the available charts.

Auto Stop Type


If the Auto Stop Type property is set to:

Execute All Simulations:

DesignXplorer updates the number of design points specified by the Number of Samples property.

Enable Auto Stop:

The number of samples required to calculate the correlation is determined according to convergences
of the means and standard deviations of the output parameters. At each iteration, the Mean and
Standard Deviation convergences are checked against the level of accuracy specified by the Mean
Value Accuracy and Standard Deviation Accuracy properties. See Correlation Convergence Pro-
cess (p. 54).

DesignXplorer attempts to minimize the number of design points to be updated by monitoring the
evolution of the output parameter values, and will stop calculating design points as soon as the level
of accuracy is met (i.e. the Mean and Standard Deviation are stable for all output parameters). If the
process has converged, DesignXplorer stops even if the number of calculated points specified by the
Number of Samples property has not been reached.

Correlation Convergence Process


Convergence of the correlation is determined as follows:

1. The convergence status is checked each time the number of points specified in the Convergence Fre-
quency Check property have been updated.

2. For each output parameter:

The Mean and the Standard Deviation are calculated based on all the up-to-date design points available
at this step.

The Mean is compared with the Mean at the previous step. It is considered to be stable if the difference
is smaller than 1% by default (Mean Value Accuracy = 0.01).

The Standard Deviation is compared with the Standard Deviation at the previous step. It is considered
to be stable if the difference is smaller than 2% by default (Standard Deviation Accuracy = 0.02).

3. If the Mean and Standard Deviation are stable for all output parameters, the correlation is converged.

The convergence status is indicated by the value of the Converged property. When the process is
converged, the Converged property equals Yes and the possible remaining unsolved samples are
automatically removed. If the process has stopped because the Number of Samples is reached before
convergence, then the Converged property equals No.
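The stability check described above can be sketched as follows (the thresholds match the default Mean Value Accuracy and Standard Deviation Accuracy; the helper names are illustrative, not the DesignXplorer API):

```python
def is_stable(previous, current, accuracy):
    """True if the relative change between two statistic values is within accuracy."""
    reference = max(abs(previous), 1e-30)    # guard against division by zero
    return abs(current - previous) / reference <= accuracy

def correlation_converged(prev_stats, curr_stats,
                          mean_accuracy=0.01, std_dev_accuracy=0.02):
    """Converged when the Mean and Standard Deviation are stable for every
    output parameter. Each dict maps output name -> (mean, standard deviation)."""
    return all(
        is_stable(prev_stats[name][0], curr_stats[name][0], mean_accuracy)
        and is_stable(prev_stats[name][1], curr_stats[name][1], std_dev_accuracy)
        for name in curr_stats
    )

prev = {"P3": (10.0, 2.00), "P4": (5.0, 1.00)}
curr = {"P3": (10.05, 2.03), "P4": (5.01, 1.01)}
# means changed by at most 0.5% and standard deviations by at most 1.5%,
# so both statistics are stable and the correlation is converged
```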

Preview Parameters Correlation


The Preview operation is available for the Parameters Correlation feature. As in other features, it allows
you to try various sampling options and preview the list of samples that would be calculated, without
actually running them (Update).

Monitoring and Interrupting the Update Operation


During a long update operation (a direct solve in the Parameters Correlation context), you can monitor
progress easily by opening the Progress view and observing the table of samples, which refreshes
automatically when results are returned to DesignXplorer.

By clicking the Stop button in the Progress view, you can interrupt the update operation. Then, if enough
samples have been calculated, partial correlation results are generated. You can review these results in
the Table view and by selecting a chart object in the Outline view.

To restart the update operation, right-click the Parameters Correlation cell and select Update from
the context menu. If you are editing the component, you can also click the Update button on the toolbar.
The process restarts where it was interrupted.

Viewing the Quadratic Correlation Information


Quadratic correlation between a parameter pair is quantified by the coefficient of determination (R²) of
the quadratic regression.

Note

R² is shown as R2 in the user interface and correlation charts.

The R² value between all parameter pairs is displayed in the Determination Matrix. The closer R² is to 1,
the better the quadratic regression. Unlike the Correlation Matrix, the Determination Matrix is not
symmetric. It is displayed in the Table view below the Correlation Matrix. Like the
Correlation Matrix, there is a Chart associated with the Determination Matrix where any parameter can
be disabled.

In addition, the Correlation Scatter chart displays both a quadratic trend line and the trend line equation
for the selected parameter pair.
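The coefficient of determination of a quadratic regression can be sketched in pure Python: fit y ~ c0 + c1·x + c2·x² by least squares and compare the residual variance to the total variance (illustrative only; DesignXplorer computes the Determination Matrix internally):

```python
def quadratic_r2(x, y):
    """R^2 of the least-squares quadratic y ~ c0 + c1*x + c2*x^2,
    solved via the 3x3 normal equations (pure-Python sketch)."""
    n = len(x)
    # Normal equations A c = b for the quadratic basis [1, x, x^2]
    A = [[sum(xi ** (i + j) for xi in x) for j in range(3)] for i in range(3)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c2 in range(col, 3):
                A[r][c2] -= f * A[col][c2]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, 3))) / A[r][r]
    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    mean_y = sum(y) / n
    fit = [c[0] + c[1] * xi + c[2] * xi * xi for xi in x]
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

Note that the R² of y regressed on x generally differs from the R² of x regressed on y, which is why the Determination Matrix is not symmetric.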

Determining Significance
Significance of an input parameter to an output parameter is determined using a statistical hypothesis
test. In the hypothesis test, a NULL hypothesis of insignificance is assumed and then tested for statistical
truth according to the significance level (or acceptable risk) set by the user. From the hypothesis test,
a p-value (probability value) is calculated and compared with the significance level. If the p-value is
greater than the significance level, the NULL hypothesis is accepted: the input parameter is concluded
to be insignificant to the output parameter, and vice versa. See the Six Sigma Analysis Theory
section for more information.
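DesignXplorer's exact test is not reproduced here, but the idea of comparing a p-value against a significance level can be illustrated with a simple permutation test on the Pearson coefficient (the helper names are illustrative, not a DesignXplorer API):

```python
import random

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def permutation_p_value(x, y, n_perm=2000, rng=random.Random(0)):
    """Estimate the p-value of the NULL hypothesis 'no correlation' by
    shuffling y and counting correlations at least as extreme as observed."""
    observed = abs(pearson(x, y))
    ys = list(y)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson(x, ys)) >= observed:
            extreme += 1
    return extreme / n_perm

significance_level = 0.025   # default Significance Level in the Options dialog
x = [float(i) for i in range(30)]
y = [2.0 * v + 1.0 for v in x]    # strongly correlated output
p = permutation_p_value(x, y)
# p <= significance_level here, so this input would be deemed significant
```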

Viewing Significance and Correlation Values


If you select Sensitivities from the Outline view in the Parameters Correlation tab, you can review the
sensitivities derived from the samples generated for the Parameters Correlation. The Parameters Correl-
ation sensitivities are global sensitivities. In the Properties view for the Sensitivities chart, you can
choose the output parameters for which you want to review sensitivities, and the input parameters that
you would like to evaluate for the output parameters.

The default setting for the Significance Level (found in the Design Exploration section of the Tools
> Options dialog) is 0.025. Input parameters found to be insignificant at this level are shown with a
flat line on the Sensitivities chart, and the value displayed for those parameters when you mouse over
them on the chart is 0. To view the actual correlation value of an insignificant parameter pair, you can
either select the Correlation Matrix from the Outline and mouse over the square for that pair in the
matrix, or you can set the Significance Level to 1, which bypasses the significance test and displays all
input parameters on the Sensitivities chart with their actual correlation values.

Parameters Correlation Charts


Parameters Correlation provides five different charts that allow you to assess parametric impacts in your
project: Correlation Matrix, Determination Matrix, Correlation Scatter, Sensitivities, and Determination
Histogram.

Release 15.0 - SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates. 55
Using Parameters Correlation

When the Parameters Correlation is updated, one instance of each chart is added to the project. In the
Parameters Correlation tab, you can view each chart by clicking on it under the Charts node of the
Outline view.

To add a new instance of a chart, double-click it in the Toolbox. The chart will be added to the bottom
of the Charts list.

To add a Correlation Scatter chart for a particular parameter combination, you can also right-click the
associated cell in the Correlation Matrix chart and select Insert <x-axis> vs <y-axis> Correlation
Scatter.

Related Topics:
Using the Correlation Matrix Chart
Using the Correlation Scatter Chart
Using the Determination Matrix Chart
Using the Determination Histogram Chart
Using the Sensitivities Chart

Using the Correlation Matrix Chart


The Correlation Matrix chart is a visual rendering of the information in the Correlation Matrix table. The
correlation coefficient indicates whether a relationship exists between two variables and whether that
relationship is positive or negative.

Color-coding of the cells indicates the strength of the correlation. The correlation value is displayed
when you hover your mouse over a cell; the closer its magnitude is to 1, the stronger the relationship.

In the Correlation Matrix below, we can see that input parameter P5LENGTH is a major input because
it drives all the outputs.


On the other hand, we can see that input parameter P13PIPE_Thickness is not important to the study
because it has little impact on the outputs. In this case, you might want to disable P13PIPE_Thickness
by deselecting the Enabled check box in the Properties view. When the input is disabled, the chart
changes accordingly.

To disable parameters, you can also right-click on a cell corresponding to that parameter and select an
option from the context menu. You can disable the selected input, disable the selected output, disable
all other inputs, or disable all other outputs.

To generate a Correlation Scatter chart for a given parameter combination, right-click on the corresponding
cell in the Correlation Matrix chart and select Insert <x-axis> vs <y-axis> Correlation Scatter.

You can also select the Export Data context option to export the correlation matrix data to a CSV file.

Using the Correlation Scatter Chart


The Correlation Scatter chart allows you to plot linear and quadratic trend lines for the samples and
extract the linear and quadratic coefficients of determination (R²). In other words, it conveys the degree
of correlation between a parameter pair via a graphical presentation of the linear and quadratic
trends.

Note

R² is shown as R2 in the user interface and correlation charts.


You can create a Correlation Scatter chart for a given parameter combination by right-clicking the
corresponding cell in the Correlation Matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation
Scatter from the context menu.

In this example, the Trend Lines section of the Properties view shows both the Linear and Quadratic
values of R².

Since both the Linear and Quadratic properties are enabled in this example:

The equations for the linear and quadratic trend lines are shown in the chart legend.

The linear and quadratic trend lines are each represented by a separate line on the chart. The closer
the samples lie to the curve, the closer the Coefficient of Determination will be to the optimum value
of 1.

When you export the Correlation Scatter chart data to a CSV file or generate a report, the trend line
equations are included in the export and are shown in the CSV file or Project Report.


Using the Determination Matrix Chart


The Determination Matrix chart is a visual rendering of the nonlinear (quadratic) information in the
Determination Matrix table. It shows the coefficient of determination (R²) between all parameter pairs.
Unlike the Correlation Matrix, however, the Determination Matrix chart is not symmetric.

Color-coding of the cells indicates the strength of the correlation (the R² value). The R² value is displayed
when you hover your mouse over a cell. The closer the R² value is to 1, the stronger the relationship.

In the Determination Matrix below, we can see that input parameter P5Tensile Yield Strength is a
major input because it drives all the outputs.

You may want to disable inputs that have little impact on the outputs. To disable parameters in the
chart:

Deselect the Enabled check box in the Properties view.

Right-click on a cell corresponding to that parameter and select an option from the context menu. You
can disable the selected input, disable the selected output, disable all other inputs, or disable all other
outputs.

You can also select the Export Data context option to export the correlation matrix data to a CSV file.
For more information, see Extended CSV File Format (p. 281).


Using the Determination Histogram Chart


The Determination Histogram chart allows you to see what inputs drive a selected output parameter.
You can set the Determination Type property to Linear or Quadratic. The Threshold R2 (%) property
allows you to filter the input parameters by hiding the input parameters with a determination coefficient
lower than the given threshold.

When you view a Determination Histogram chart, you should also check the Full Model R2 (%) value
to see how well output variations are explained by input variations. This value represents the variability
of the output parameter that can be explained by a linear or quadratic correlation between the input
parameters and the output parameter. The closer this value is to 100%, the more certain it is that output
variations result from the inputs. The lower the value, the more likely it is that other factors such as
noise, mesh error, or an insufficient number of points may be causing the output variations.
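The idea behind the Full Model R2 (%) value can be sketched as a least-squares fit of the output against all inputs at once. This is illustrative pure Python with made-up data, not the product's internal computation:

```python
# Sketch: the Full Model R^2 idea -- fit the output against all inputs
# together (here a linear model via least squares) and report how much
# of the output variance the fit explains.

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small system.
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def full_model_r2(X, y):
    # X: list of samples, each a list of input parameter values.
    rows = [[1.0] + xi for xi in X]               # add intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yv for r, yv in zip(rows, y)) for i in range(k)]
    c = solve(A, b)
    y_hat = [sum(ci * ri for ci, ri in zip(c, r)) for r in rows]
    y_bar = sum(y) / len(y)
    ss_res = sum((yv - yh) ** 2 for yv, yh in zip(y, y_hat))
    ss_tot = sum((yv - y_bar) ** 2 for yv in y)
    return 1.0 - ss_res / ss_tot

# Output driven almost entirely by two inputs, plus a little "noise".
X = [[i, (i * 7) % 5] for i in range(12)]
y = [3.0 * a - 2.0 * b + (0.01 if i % 2 else -0.01) for i, (a, b) in enumerate(X)]
print(round(100.0 * full_model_r2(X, y), 1))     # close to 100 (%)
```

With larger unexplained noise in the output, the value would drop, mirroring the interpretation described above.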

In the image below, you can see that input parameters P3LENGTH, P2HEIGHT, and P4FORCE all
affect output P8DISPLACEMENT. You can also see that of the three inputs, P3LENGTH has by far
the greatest impact.

In the example below, you can see that the value for a linear determination is 96.2%.


To view the chart for a quadratic determination, in the Properties view, set Determination Type to
Quadratic. In the example below, we can see that with a quadratic determination type, input P5YOUNG
is shown to also have a slight impact on P8DISPLACEMENT. (You can filter your inputs to keep only
the most important parameters, enabling or disabling them with the check boxes in the Outline view.)

In this example, we can also see that the Full Model R2 (%) value improves slightly, rising to
97.436%.

Using the Sensitivities Chart


The Sensitivities chart shows global sensitivities of the output parameters with respect to the input
parameters. Positive sensitivity occurs when increasing the input increases the output. Negative
sensitivity occurs when increasing the input decreases the output.

The Sensitivities chart can be displayed in either Bar or Pie mode. The chart below is displayed in Bar
mode.

Generally, the impact of an input parameter on an output parameter is driven by the following two
factors:

The amount by which the output parameter varies across the variation range of an input parameter.


The variation range of an input parameter. Typically, the wider the variation range is, the larger the impact
of the input parameter will be.

The statistical sensitivities are based on the Spearman rank-order correlation coefficients, which
simultaneously take both aspects into account.
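A minimal sketch of a Spearman rank-order correlation coefficient follows (no tie handling, illustrative data; not the product's exact implementation):

```python
# Sketch: Spearman rank-order correlation, the statistic behind the
# global sensitivities.  Ranks replace raw values, so monotonic but
# non-linear input/output relationships are still detected.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)          # no tie handling in this sketch
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n + 1) / 2.0               # mean rank, same for both variables
    # Pearson correlation computed on the ranks
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = (sum((a - m) ** 2 for a in rx) * sum((b - m) ** 2 for b in ry)) ** 0.5
    return num / den

x = [0.5, 1.0, 2.0, 3.5, 5.0]
y = [v ** 3 for v in x]      # monotonic increase -> positive sensitivity
print(spearman(x, y))        # -> 1.0
```

Even though y varies as the cube of x, the rank-based coefficient still registers the monotonic relationship at full strength, which is why it suits sensitivity measurement.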

Using Design of Experiments
Design of Experiments (DOE) is a technique used to scientifically determine the location of sampling
points and is included as part of the Response Surface, Goal Driven Optimization, and Six Sigma
Analysis systems. A wide range of DOE algorithms or methods is available in the engineering literature.
These techniques all share one common characteristic: they try to locate the sampling points such that
the space of random input parameters is explored in the most efficient way, obtaining the required
information with a minimum of sampling points. Sampling points in efficient locations not only reduce
the required number of points, but also increase the accuracy of the response surface that is derived
from their results. By default, the deterministic method uses a Central Composite Design, which
combines one center point, points along the axes of the input parameters, and points determined by
a fractional factorial design.
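The size of such a design can be counted as follows; this sketch assumes a full factorial part (in practice the fraction used for the factorial part grows with the number of inputs):

```python
# Sketch: size of a Central Composite Design -- one center point, two
# axis points per input, plus a (possibly fractional) factorial part.
# f = 0 (full factorial) is assumed here for illustration only.

def ccd_point_count(n_inputs, fraction=0):
    center = 1
    axis = 2 * n_inputs
    factorial = 2 ** (n_inputs - fraction)
    return center + axis + factorial

print(ccd_point_count(3))   # -> 15 points for 3 inputs
```

For three inputs this gives 1 + 6 + 8 = 15 design points; with more inputs, a fractional factorial part keeps the total from growing exponentially.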

Once you have set up your input parameters, you can update the DOE, which submits the generated
design points to the analysis system for solution. Design points are solved simultaneously if the analysis
system is set up to do so; sequentially, if not. After the solution is complete, you can update the Response
Surface cell, which generates response surfaces for each output parameter based on the data in the
generated design points.

Note

Requirements and recommendations regarding the number of input parameters vary
according to DOE type. For more information, see Number of Input Parameters for DOE
Types (p. 69).

If you change the Design of Experiments type after doing an initial analysis and preview the Design of
Experiments Table, any design points generated for the new algorithm that are the same as design
points solved for a previous algorithm will appear as up-to-date. Only the design points that are different
from any previously submitted design points need to be solved.

You should set up your DOE Properties before generating your DOE Design Point matrix. The following
topics describe setting up and solving your Design of Experiments.
Setting Up the Design of Experiments
Design of Experiments Types
Number of Input Parameters for DOE Types
Comparison of LHS and OSF DOE Types
Using a Central Composite Design DOE
Upper and Lower Locations of DOE Points
DOE Matrix Generation
Importing and Copying Design Points

Setting Up the Design of Experiments


To set up your Design of Experiments (DOE):


1. Open the Design of Experiments tab by double-clicking on the Design of Experiments cell in the
Project Schematic. Alternatively, you can right-click the cell and select Edit from the context menu.

2. On the Design of Experiments tab Outline view, select the Design of Experiments node.

3. In the Design of Experiments section of the Properties view, select a Design of Experiments Type.
For details, see Design of Experiments Types (p. 64).

4. Specify additional properties for the DOE. (Available properties are determined by the Design of
Experiments Type selected.)

5. Click the Update toolbar button.

Design of Experiments Types


The DOE types available are:
Central Composite Design (CCD)
Optimal Space-Filling Design (OSF)
Box-Behnken Design
Custom
Custom + Sampling
Sparse Grid Initialization
Latin Hypercube Sampling Design (LHS)

Central Composite Design (CCD)


Central Composite Design (CCD) is the default DOE type. It provides a screening set to determine the
overall trends of the meta-model to better guide the choice of options in Optimal Space-Filling Design.
The CCD DOE type supports a maximum of 20 input parameters. For more information, see Number of
Input Parameters for DOE Types (p. 69).

The following properties are available for the CCD DOE type.

Design Type: By specifying the Design Type for CCD, you can help to improve the response surface fit
for DOE studies. For each CCD type, the alpha value is defined as the location of the sampling point that
accounts for all quadratic main effects. The following CCD design types are available:

Face-Centered: A three-level design with no rotatability. The alpha value equals 1.0. A Template Type
setting automatically appears, with Standard and Enhanced options. Choose Enhanced for a possible
better fit for the response surfaces.

Rotatable: A five-level design that includes rotatability. The alpha value is calculated based on the
number of input variables and a fraction of the factorial part. A design with rotatability has the same
variance of the fitted value regardless of the direction from the center point.

VIF-Optimality: A five-level design in which the alpha value is calculated by minimizing a measure of
non-orthogonality known as the Variance Inflation Factor (VIF). The more highly correlated an input
variable is with one or more terms in the regression model, the higher its Variance Inflation Factor.

G-Optimality: Minimizes a measure of the expected error in a prediction and minimizes the largest
expected variance of prediction over the region of interest.


Auto-Defined: Design exploration automatically selects the Design Type based on the number of input
variables. Use of this option is recommended for most cases, as it automatically selects G-Optimality
(if the number of input variables is 5) or VIF-Optimality otherwise.

However, the Rotatable design may be used if the default option does not provide good values
for the Goodness of Fit from the response surface plots. Also, the Enhanced template may be used
if the default Standard template does not fit the response surfaces well.

Template Type: Enabled for the Rotatable and Face-Centered design types. The following options are
available:

Standard

Enhanced: Choose this option for a possible better fit for the response surfaces.

For more information, see Using a Central Composite Design DOE (p. 71).

Optimal Space-Filling Design (OSF)


Optimal Space-Filling Design (OSF) creates optimal space filling Design of Experiments (DOE) plans
according to some specified criteria. Essentially, OSF is a Latin Hypercube Sampling Design (LHS) that
is extended with post-processing. It is initialized as an LHS and then optimized several times, remaining
a valid LHS (without points sharing rows or columns) while achieving a more uniform space distribution
of points (maximizing the distance between points).

To offset the noise associated with physical experimentation, classical DOE types such as CCD focus on
parameter settings near the perimeter of the design region. Because computer simulation is not quite
as subject to noise, though, the Optimal Space-Filling (OSF) design is able to distribute the design
parameters equally throughout the design space with the objective of gaining the maximum insight
into the design with the fewest number of points. This advantage makes it appropriate when a more
complex meta-modeling technique such as Kriging, Non-Parametric Regression or Neural Networks is
used.

OSF shares some of the same disadvantages as LHS, though to a lesser degree. Possible disadvantages
of an OSF design are that extremes (i.e., the corners of the design space) are not necessarily covered
and that the selection of too few design points can result in a lower quality of response prediction.

Note

The OSF DOE type is a Latin Hypercube Sampling design that is extended with post-processing.
For a comparison of the two, see Comparison of LHS and OSF DOE Types (p. 70).

The following properties are available for the OSF DOE type.

Design Type: The following choices are available:

Max-Min Distance (default): Maximizes the minimum distance between any two points, ensuring that
no two points are too close to each other. For a small number of samples (N), the Max-Min Distance
design will generally lie on the exterior of the design space, filling in the interior as N becomes larger.
This is generally the fastest of the three algorithms.

Centered L2: Minimizes the centered L2-discrepancy measure. The discrepancy measure corresponds
to the difference between the empirical distribution of the sampling points and the uniform distribution.
In other words, the centered L2 criterion yields a uniform sampling. It is computationally faster than
Maximum Entropy.

Maximum Entropy: Maximizes the determinant of the covariance matrix of the sampling points to
minimize uncertainty in unobserved locations. This option often provides better results for highly cor-
related design spaces. However, its cost increases non-linearly with the number of input parameters
and the number of samples to be generated. Thus, it is recommended only for small parametric problems.

Maximum Number of Cycles: Determines the number of optimization loops the algorithm runs, which
in turn determines the discrepancy of the DOE. The optimization is essentially combinatorial, so a large
number of cycles will slow down the process; however, it will also make the discrepancy of the DOE smaller.
For practical purposes, 10 cycles are usually sufficient for up to 20 variables. The value must be greater
than 0. The default is 10.

Samples Type: Determines the number of DOE points the algorithm should generate. This option is
suggested if you have some advanced knowledge about the nature of the meta-model. The following
choices are available:

CCD Samples (default): Generates the same number of samples a CCD DOE would generate for the
same number of inputs. You may want to use this to generate a space filling design which has the same
cost as a corresponding CCD design.

Linear Model Samples: Generates the number of samples as needed for a linear meta-model.

Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quadratic
meta-model (no cross terms).

Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic model.

User-Defined Samples: Specify the desired number of samples.

Seed Value: Set the value used to initialize the random number generator invoked internally by the LHS
algorithm. Although the generation of a starting point is random, the seed value consistently results in a
specific LHS. This property allows you to generate different samplings (by changing the value) or to regen-
erate the same sampling (by keeping the same value). Defaults to 0.

Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the default
number of samples. Defaults to 10.
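The optimization idea behind OSF can be sketched as a simple hill climb that swaps column entries between two rows (which keeps the design a valid LHS) and keeps only swaps that do not worsen the minimum pairwise distance. This is an illustration of the Max-Min Distance criterion, not the product's optimizer:

```python
# Sketch: starting from a valid Latin hypercube, random column swaps
# between two rows are kept only when the smallest pairwise distance
# does not decrease, giving a more uniform space filling.
import random

def min_pairwise_dist(pts):
    return min(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for i, p in enumerate(pts) for q in pts[i + 1:]
    )

def improve_spread(pts, cycles=200, seed=0):
    rng = random.Random(seed)
    pts = [list(p) for p in pts]
    best = min_pairwise_dist(pts)
    for _ in range(cycles):
        i, j = rng.sample(range(len(pts)), 2)
        col = rng.randrange(len(pts[0]))
        pts[i][col], pts[j][col] = pts[j][col], pts[i][col]  # still a valid LHS
        d = min_pairwise_dist(pts)
        if d >= best:
            best = d
        else:                                   # revert a worsening swap
            pts[i][col], pts[j][col] = pts[j][col], pts[i][col]
    return pts, best

start = [[0, 0], [1, 1], [2, 2], [3, 3], [4, 4]]   # clustered diagonal LHS
pts, d = improve_spread(start)
print(d >= min_pairwise_dist(start))               # -> True (never worse)
```

Each swap exchanges one coordinate between two rows, so every column remains a permutation of the grid levels; the design stays a valid LHS while its spread can only improve.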

Box-Behnken Design
A Box-Behnken Design is a three-level quadratic design that does not contain a fractional factorial design.
The sample combinations are arranged such that they are located at the midpoints of the edges formed
by any two factors. The design is rotatable (or, in some cases, nearly rotatable).

One advantage of a Box-Behnken design is that it requires fewer design points than a full factorial CCD
and generally requires fewer design points than a fractional factorial CCD. Additionally, a Box-Behnken
Design avoids extremes, allowing you to work around extreme factor combinations. Consider using the
Box-Behnken Design DOE type if your project has parametric extremes (for example, has extreme
parameter values in corners that are difficult to build). Since the Box-Behnken DOE doesn't have corners
and does not combine parametric extremes, it can reduce the risk of update failures. For details, see
the Box-Behnken Design (p. 215) Theory section.

Possible disadvantages of a Box-Behnken design are:


Prediction at the corners of the design space is poor, and there are only three levels per parameter.

A maximum of 12 input parameters is supported. For more information, see Number of Input Para-
meters for DOE Types (p. 69).

No additional properties are available for the Box-Behnken Design DOE type.

Custom
The Custom DOE type allows for definition of a custom DOE Table. You can manually add new design
points, entering the input and (optionally) output parameter values directly into the table. If you previ-
ously solved the DOE using one of the other algorithms, those design points will be retained and you
can add new design points to the table. You can also import and export design points into the custom
DOE Table from the Parameter Set.

You can change the editing mode of the DOE table in order to edit the output parameter values. You
can also copy/paste data and import data from a CSV file (right-click and select Import Design Points).

For more information, see Working with Tables (p. 205).

Note

You can generate a DOE matrix using one of the other design options provided, then switch
to Custom to modify the matrix. The opposite however is not possible; that is, if you define a
Custom matrix first, then change to one of the other design options provided, the matrix is
cleared.

The table can contain derived parameters. Derived parameters are always calculated by the
system, even if the table mode is All Output Values Editable.

Editing output values for a row changes the Design of Experiments cell's state to Update
Required. The DOE will need to be updated, even though no calculations are done.

The DOE charts do not reflect the points added manually using the Custom option until the
DOE is Updated.

It is expected that the Custom option will be used to enter DOE plans that were built externally.
If you use this feature to enter all of the points in the matrix manually, you must make sure to
enter enough points so that a good fitting can be created for the response surface. This is an
advanced feature that should be used with caution; always verify your results with a direct
solve.

No additional properties are available for the Custom DOE type.

Custom + Sampling
The Custom + Sampling DOE type provides the same capabilities as the Custom DOE type and can
automatically complete the DOE table to fill the design space efficiently. For example, a DOE table
initialized with imported design points from a previous study, or your initial DOE (Central Composite
Design, Optimal Space-Filling Design, or Custom), can be completed with new points. The generation
of these new design points takes into account the coordinates of the existing design points.


The following property is available for the Custom + Sampling DOE type:

Total Number of Samples: The desired number of samples (the number of existing design points is
included in this count). You must enter a positive number. If the total number of samples is less than
the number of existing points, no new points are added. If there are discrete input parameters, the
total number of samples corresponds to the number of points to be reached for each combination of
discrete parameters.

Sparse Grid Initialization


Sparse Grid Initialization is the DOE type required to run a Sparse Grid Interpolation. Sparse Grid is
an adaptive meta-model driven by the accuracy that you request. It increases the accuracy of the response
surface by automatically refining the matrix of design points in locations where the relative error of the
output parameter is higher. This DOE type generates levels 0 and 1 of the Clenshaw-Curtis Grid. In
other words, because the Sparse Grid algorithm is based on a hierarchy of grids, the Sparse Grid
Initialization DOE type generates a DOE matrix containing all the design points for the smallest required
grid: level 0 (the point at the current values) plus level 1 (two points per input parameter).

One advantage of a Sparse Grid design is that it refines only in the directions necessary, so that fewer
design points are needed for the same quality of response surface. Another is that Sparse Grid is effective
at handling discontinuities. Although this DOE type is required to build a Sparse Grid response surface,
it can also be used by other response surface types.

No additional properties are available for the Sparse Grid Initialization DOE type.

Latin Hypercube Sampling Design (LHS)


In the Latin Hypercube Sampling Design DOE type, the DOE is generated by the LHS algorithm, an
advanced form of the Monte Carlo sampling method that avoids clustering samples. In a Latin Hypercube
Sampling, the points are randomly generated in a square grid across the design space, but no two
points share the same value (i.e., no point shares a row or a column of the grid with any other point).
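A minimal sketch of the sampling scheme (one random permutation per parameter, points placed at cell centers; illustrative only, not the product's generator):

```python
# Sketch of Latin hypercube sampling: each of n samples occupies its
# own row and column of an n-by-n grid.  Seeded for repeatability.
import random

def lhs(n_samples, n_params, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(n_params):
        perm = list(range(n_samples))
        rng.shuffle(perm)
        # place each point at the centre of its grid cell, in [0, 1)
        cols.append([(cell + 0.5) / n_samples for cell in perm])
    return [list(point) for point in zip(*cols)]

samples = lhs(5, 2)
# No two samples share a grid row or column in any parameter:
for dim in range(2):
    values = sorted(s[dim] for s in samples)
    print(values == [0.1, 0.3, 0.5, 0.7, 0.9])   # -> True, True
```

Because each parameter's values form a permutation of the grid cells, every row and column of the grid contains exactly one sample, which is the defining property of an LHS.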

Possible disadvantages of an LHS design are that extremes (i.e., the corners of the design space) are
not necessarily covered and that the selection of too few design points can result in a lower quality of
response prediction.

Note

The Optimal Space-Filling Design DOE type is an LHS design that is extended with post-
processing. For a comparison of the two, see Comparison of LHS and OSF DOE Types (p. 70).

The following properties are available for the LHS DOE type:

Samples Type: Determines the number of DOE points the algorithm should generate. This option is
suggested if you have some advanced knowledge about the nature of the meta-model. The following
choices are available:

CCD Samples (default): Generates the same number of samples a CCD DOE would generate for the
same number of inputs. You may want to use this to generate a space filling design which has the same
cost as a corresponding CCD design.

Linear Model Samples: Generates the number of samples as needed for a linear meta-model.


Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quadratic
meta-model (no cross terms).

Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic model.

User-Defined Samples: Specify the desired number of samples.

Seed Value: Set the value used to initialize the random number generator invoked internally by the LHS
algorithm. Although the generation of a starting point is random, the seed value consistently results in a
specific LHS. This property allows you to generate different LHS samplings (by changing the value) or to
regenerate the same LHS sampling (by keeping the same value). Defaults to 0.

Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the default
number of samples. Defaults to 10.
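For orientation, the minimum number of points implied by each model-based Samples Type follows from counting polynomial terms. This is standard term counting, not necessarily the exact sample count the product generates:

```python
# Sketch: minimum point counts implied by the Samples Type options --
# a linear model in n inputs has n + 1 terms, a pure quadratic model
# 2n + 1 terms, and a full quadratic model (n + 1)(n + 2) / 2 terms
# (including cross terms).

def model_terms(n_inputs, model):
    n = n_inputs
    return {
        "linear": n + 1,                           # 1, x1..xn
        "pure quadratic": 2 * n + 1,               # ... plus x1^2..xn^2
        "full quadratic": (n + 1) * (n + 2) // 2,  # ... plus cross terms
    }[model]

for model in ("linear", "pure quadratic", "full quadratic"):
    print(model, model_terms(10, model))
```

For 10 inputs this gives 11, 21, and 66 terms respectively, showing how quickly the required sample count grows once cross terms are included.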

Number of Input Parameters for DOE Types


The number of enabled input parameters has an impact on the generation of a DOE and response
surface. The more input parameters that are enabled, the longer it takes to generate the DOE and update
corresponding design points. Additionally, having a large number of input parameters makes it more
difficult to generate an accurate response surface. The quality of a response surface is dependent on
the relationships that exist between input and output parameters, so having fewer enabled inputs
makes it easier to determine how they impact the output parameters.

As such, the recommendation for most DOE types is to have as few enabled input parameters as possible
(fewer than 20 would be ideal). The exceptions are the CCD and Box-Behnken algorithms; CCD has a
hard limit at 20 and Box-Behnken has a hard limit at 12 parameters. The number of inputs should be
taken into account when selecting a DOE type for your study (or when defining inputs, if you know
ahead of time which DOE type you intend to use).

If you are using a DOE type other than CCD or Box-Behnken and more than the recommended maximum
of 20 inputs are enabled, DesignXplorer shows an alert icon in the Message column of the Outline
view of the Design of Experiments tab, the Response Surface tab, and the Optimization tab for
Adaptive Single-Objective (ASO) and Adaptive Multiple-Objective (AMO) optimizations.

The warning icon is displayed only when required component edits are completed. The number next
to the icon indicates the number of active warnings, and you can click on the icon to review the
warning messages. To remove the warning, disable inputs by deselecting them in the Enabled column
until 20 or fewer are still enabled. If you are unsure of which parameters to disable, you can use a
Parameters Correlation study to determine which inputs are least correlated with your results. For more
information, see "Using Parameters Correlation" (p. 51).


Besides the number of enabled inputs, however, there are other factors that can affect response surface
generation. For example:

Meta-models (response surfaces) using automatic refinement will add additional design points to
improve the resolution of each output. The more outputs and the more complicated the relationship
between the inputs and outputs, the more design points that will be required.

Increasing the number of output parameters increases the number of response surfaces that are
required.

Discrete input parameters can be expensive because a response surface is generated for each discrete
combination, as well as for each output parameter.

A non-linear or non-polynomial relationship between input and output parameters will require more
design points to build an accurate response surface, even with a small number of enabled inputs.

Factors such as these can offset the importance of using a small number of enabled input parameters.
If you expect that a response surface can be generated with relative ease (for example, the project has
polynomial relationships between the inputs and outputs, only continuous inputs, and a small number
of outputs), you may decide it's worthwhile to exceed the recommended number of inputs. In this
case, you can ignore the warning and proceed with your update.

Comparison of LHS and OSF DOE Types


The Latin Hypercube Design (LHS) and Optimal Space-Filling Design (OSF) methods both create Design
of Experiments (DOE) plans according to specified criteria.

An LHS design is an advanced form of the Monte Carlo sampling method. In an LHS design, no point
shares a row or column of the design space with any other point.

An OSF design is essentially an LHS design that is optimized through several iterations, maximizing the
distance between points to achieve a more uniform distribution across the design space. Because it aims
to gain the maximum insight into the design by using the fewest number of points, it is an effective DOE
choice for complex meta-modeling techniques that use relatively large numbers of design points.

Because OSF incorporates the LHS algorithm, both DOE types aim to conserve optimization resources
by avoiding the creation of duplicate points. Given an adequate number of design points to work with,
both methods result in a high quality of response prediction. The OSF algorithm, however, offers the
added benefit of fuller coverage of the design space.

For example, with a two-dimensional problem that has only two input parameters and uses only six
design points, it can be difficult to build an adequate response surface. This is especially true in the
case of LHS because of its nonuniform distribution of design points over the design space.

When the number of design points for the same scenario is increased to twenty, the quality of the
resulting response surface is improved. The LHS method, however, can result in close, uneven groupings
of design points and so can skip parts of the design space. The OSF method, with its maximization of
the distance between points and more uniform distribution of points, addresses extremes more effectively
and provides far better coverage of the design space. For this reason, OSF is the recommended method.
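The distinction between the two sampling strategies can be sketched in standalone Python (illustrative only — DesignXplorer's actual OSF algorithm iteratively optimizes within an LHS rather than restarting, so the maximin restart search below is a simplified stand-in):

```python
import random
from itertools import combinations

def lhs(n, dims, rng):
    """Latin Hypercube sample: exactly one point in each of the n
    row/column bins of every dimension."""
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per bin
        rng.shuffle(col)                                  # random bin pairing
        cols.append(col)
    return list(zip(*cols))

def min_pairwise_dist(pts):
    """Smallest Euclidean distance between any two points."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
               for p, q in combinations(pts, 2))

def space_filling(n, dims, rng, iters=200):
    """OSF-style improvement: keep the LHS candidate whose minimum
    pairwise distance is largest (a maximin space-filling criterion)."""
    best = lhs(n, dims, rng)
    best_d = min_pairwise_dist(best)
    for _ in range(iters):
        cand = lhs(n, dims, rng)
        d = min_pairwise_dist(cand)
        if d > best_d:
            best, best_d = cand, d
    return best

rng = random.Random(0)
plain = lhs(6, 2, rng)             # may leave uneven gaps
filled = space_filling(6, 2, rng)  # more uniform coverage of the unit square
```

Both samples satisfy the LHS row/column property; the space-filling variant simply prefers the candidate whose points are spread farthest apart.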

Using a Central Composite Design DOE


In Central Composite Design (CCD), a Rotatable (spherical) design is preferred, since the prediction
variance is then the same for any two locations that are the same distance from the design center.
However, there are other criteria to consider for an optimal design setup. Among them, two are
commonly considered in setting up an optimal design using the design matrix:

1. The degree of non-orthogonality of regression terms can inflate the variance of model coefficients.

2. A sample point in the design can exert an abnormal influence on the regression, depending on its
position with respect to the other points in the input parameter space.

An optimal CCD design should minimize both the degree of non-orthogonality of term coefficients and
the opportunity of sample points having abnormal influence. In minimizing the degree of non-orthogonality,
the Variance Inflation Factor (VIF) of regression terms is used. For a VIF-Optimality design,
the maximum VIF of the regression terms is to be minimized; the minimum possible value is 1.0. In
minimizing the opportunity of influential sample points, the leverage value of each sample point is used.
Leverages are the diagonal elements of the Hat matrix, which is a function of the design matrix. For a
G-Optimality design, the maximum leverage value of the sample points is to be minimized.
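Both criteria have simple closed forms in low dimension, which makes the idea easy to see in a standalone Python sketch (these are the textbook two-regressor VIF and simple-regression leverage formulas, not DesignXplorer's internal implementation):

```python
import math

def vif_two_regressors(x1, x2):
    """VIF = 1 / (1 - r^2) for a model with two non-intercept regressors,
    where r is their sample correlation (closed form for the 2-term case)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in x1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in x2))
    r = cov / (s1 * s2)
    return 1.0 / (1.0 - r * r)

def leverages_simple(x):
    """Diagonal of the Hat matrix for y = b0 + b1*x:
    h_i = 1/n + (x_i - xbar)^2 / Sxx."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((v - xbar) ** 2 for v in x)
    return [1.0 / n + (v - xbar) ** 2 / sxx for v in x]

# An orthogonal (uncorrelated) pair reaches the ideal minimum VIF of 1.0.
x1 = [-1, -1, 1, 1]
x2 = [-1, 1, -1, 1]
print(vif_two_regressors(x1, x2))   # 1.0

# Leverage is largest at the extremes of the design, where a single
# sample point most strongly pulls the fitted model toward itself.
h = leverages_simple([-1.0, -0.5, 0.0, 0.5, 1.0])
```

A useful sanity check is that the leverages always sum to the number of regression coefficients (here 2).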

For a VIF-Optimality design, the alpha value/level is selected such that the maximum VIF is at its
minimum. Likewise, for a G-Optimality design, the alpha value/level is selected such that the maximum
leverage is at its minimum. The rotatable design is found to be a poor design in terms of VIF- and G-Efficiencies.

For an optimal CCD, the alpha value/level is selected such that both the maximum VIF and the maximum
leverage are as small as possible. For the Auto Defined design, the alpha value is selected from
either the VIF- or G-Optimality design that meets the criteria. Since this is a multi-objective optimization
problem, in many cases there is no unique alpha value for which both criteria reach their minimum.
Instead, the alpha value is evaluated such that one criterion reaches its minimum while the other
approaches its minimum.

In the current Auto Defined setup, all multi-variable problems use VIF-Optimality, except for problems
with five variables, which use the G-Optimality design. In some cases, even though Auto Defined
provides an optimal alpha meeting the criteria, an Auto Defined design might not give as
good a response surface as anticipated due to the nature of the physical data used for fitting in the
regression process. In that case, you should try other design types that might give a better response
surface approximation.

Note

Default values for CCD Types can be set in the Design Exploration > Design of Experiments
section of the Options dialog box, which is accessible from the Tools menu.

It is good practice to always verify some selected points on the response surface with an actual simulation
evaluation to determine its validity for further analyses. In some cases, a good response surface
does not mean a good representation of the underlying physics problem, because the response surface is
generated according to predetermined sampling points in the design space and can miss capturing an
unexpected change in some regions of the design space. In that case, you should
try Extended DOE. In Extended DOE, a mini CCD is appended to a standard CCD design: a second
alpha value is added, set to half the alpha value of the Standard CCD. The mini CCD is set up in
a way that the essences of the CCD design (rotatability and symmetry) are still maintained. The appended
mini CCD serves two purposes:

1. to capture a drastic change within the design space, if any.

2. to provide a better response surface fit.

The two purposes seem to be conflicting in some cases where the response surface might not be as
good as that of the Standard DOE due to the limitation of a quadratic response surface in capturing a
drastic change within the design space.

The location of the generated design points for the deterministic method is based on a central composite
design. If N is the number of input parameters, then a central composite design consists of:

One center point.

2*N axis points located at the -alpha and +alpha positions on each axis of the selected input parameters.

2^(N-f) factorial points located at the -1 and +1 positions along the diagonals of the input parameter space.

The fraction f of the factorial design and the resulting number of design points are given in the following
table:

Number of Generated Design Points as a Function of the Number of Input Parameters

Number of Input Parameters Factorial Number f Number of Design Points


1 0 5
2 0 9
3 0 15
4 0 25
5 1 27
6 1 45
7 1 79
8 2 81
9 2 147
10 3 149
11 4 151
12 4 281
13 5 283
14 6 285
15 7 287
16 8 289
17 9 291
18 9 549
19 10 551
20 11 553
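The table follows directly from this structure; a few lines of Python (with the fraction f values read from the table above) reproduce every row:

```python
# Fraction f for N = 1..20 input parameters, taken from the table above
f_by_n = {1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 1, 7: 1, 8: 2, 9: 2, 10: 3,
          11: 4, 12: 4, 13: 5, 14: 6, 15: 7, 16: 8, 17: 9, 18: 9,
          19: 10, 20: 11}

def ccd_points(n):
    """1 center point + 2*N axis points + 2**(N - f) factorial points."""
    return 1 + 2 * n + 2 ** (n - f_by_n[n])

print([ccd_points(n) for n in (1, 2, 5, 10, 20)])  # [5, 9, 27, 149, 553]
```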

Upper and Lower Locations of DOE Points


The upper and lower levels of the DOE points depend on whether the input variable is a design variable
(optimization) or an uncertainty variable (Six Sigma Analysis). Generally, a response surface will be more
accurate when closer to the DOE points. Therefore, the points should be close to the areas of the input
space that are critical to the effects being examined.

For example, for Goal Driven Optimization, the DOE points should be located close to where the optimum
design is determined to be. For a Six Sigma Analysis, the DOE points should be close to the area where
failure is most likely to occur. In both cases, the location of the DOE points depends upon the outcome
of the analysis. Not having that knowledge at the start of the analysis, you can determine the location
of the points as follows:

For a design variable, the upper and lower levels of the DOE range coincide with the bounds specified
for the input parameter.

It often happens in optimization that the optimum point is at one end of the range specified for one
or more input parameters.

For an uncertainty variable, the upper and lower levels of the DOE range are the quantile values corres-
ponding to a probability of 0.1% and 99.9%, respectively.

This is the standard procedure whether the input parameter follows a bounded (e.g., uniform) or
unbounded (e.g., Normal) distribution, because the probability that the input variable value will exactly
coincide with the upper or lower bound (for a bounded distribution) is exactly zero. That is, failure
can never occur when the value of the input variable is equal to the upper or lower bound. Failure
typically occurs in the tails of a distribution, so the DOE points should be located there, but not at
the very end of the distribution.
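For a Normal uncertainty variable, those quantiles can be computed with Python's standard library (an illustration of the rule, not a DesignXplorer API):

```python
from statistics import NormalDist

def doe_bounds_normal(mean, stdev):
    """DOE range for a Normal uncertainty variable: the 0.1% and 99.9%
    quantiles, rather than the distribution's (infinite) bounds."""
    d = NormalDist(mean, stdev)
    return d.inv_cdf(0.001), d.inv_cdf(0.999)

lo, hi = doe_bounds_normal(100.0, 10.0)
# roughly mean -/+ 3.09 standard deviations, i.e. about (69.1, 130.9)
```

The points land well into the tails of the distribution, where failure typically occurs, without chasing the unbounded extremes.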

DOE Matrix Generation


The DOE matrix is generated when you Preview or Update your DOE. This matrix consists of generated
design points that are submitted to the parent analysis systems for solution. Output parameter values
are not available until after the Update. When DOE matrix generation is prolonged (for example, for
Optimal Space-Filling or certain VIF- or G-Optimal designs), you can monitor its progress by clicking
the Show Progress button in the lower right corner of the window.

Note

The design points will be solved simultaneously if the analysis system is configured to perform
simultaneous solutions; otherwise they will be solved sequentially.

To clear the design points generated for the DOE matrix, return to the Project Schematic, right-click
on the Design of Experiments cell, and select Clear Generated Data. You can clear data from any
design exploration cell in the Project Schematic in this way, and regenerate your solution for that cell
with changes to the parameters if you so desire.

Importing and Copying Design Points


If you are using a Custom or Custom+Sampling Design of Experiments, you can import design point
values from an external CSV file or copy existing design points from the Parameter Set cell.

The Import Design Points operation is available in the contextual menu when you right-click:

a Design of Experiments component from the Project Schematic view,

the Design of Experiments node from the Outline view,

on the table of design points, in the Table view of the Design of Experiments tab.

Note

These operations are only available if the type of Design of Experiments is Custom or Custom
+ Sampling.

See Importing Data from a CSV file for more information.

Copy all Design Points from the Parameter Set into a DOE Table
1. From the Project Schematic, double click on the Design of Experiments cell where you want to copy
the design points.

2. If needed, click on the Design of Experiments object in the Outline and set the Design of Experiments
Type to Custom or Custom + Sampling.

3. Right-click the Design of Experiments object in the Outline, or right-click in the Table view, and select
Copy all Design Points from the Parameter Set.

All design points from the Parameter Set will be copied to the Design of Experiments table if the values
of the parameters in the design points fall within the defined ranges of the parameters in the DOE. If
some of the parameter values in the design points are out of range, you will be given the option to
automatically expand the parameter ranges in the DOE to accommodate the out of range values. If you
choose not to expand the parameter ranges, only the design points where all parameters fall within
the existing ranges will be copied into the DOE Table. You may also cancel the import action.
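The range check described above can be sketched as follows (a hypothetical standalone model of the behavior, with dicts standing in for design points — not DesignXplorer code):

```python
def copy_design_points(points, ranges, expand=False):
    """points: list of {param: value} dicts; ranges: {param: (lo, hi)}.
    With expand=True, grow the ranges to cover every point and copy all;
    otherwise copy only points whose values all fall inside the ranges."""
    if expand:
        for p in points:
            for name, v in p.items():
                lo, hi = ranges[name]
                ranges[name] = (min(lo, v), max(hi, v))
        return points, ranges
    kept = [p for p in points
            if all(ranges[n][0] <= v <= ranges[n][1] for n, v in p.items())]
    return kept, ranges

pts = [{"P1": 1.0, "P2": 5.0}, {"P1": 3.5, "P2": 5.0}]
kept, _ = copy_design_points(pts, {"P1": (0.0, 3.0), "P2": (0.0, 10.0)})
# only the first point fits the existing P1 range of (0.0, 3.0)
```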

Copy Selected Design Points from the Parameter Set


1. From the Project Schematic, double click on the Parameter Set.

2. Select one or more design points from the Table of Design Points.

3. Right-click and select Copy Design Points to: and then select one of the DOE cells listed in the submenu.
Only DOE cells with DOE Type of Custom or Custom + Sampling will be listed.

All selected design points will be copied from the Parameter Set to the selected Design of Experiments
table following the same procedure as above.

Note

If an unsolved design point was previously copied to a custom DOE, and subsequently this
design point is solved in the Parameter Set, you can copy it to the custom DOE again to
push its output values to the Custom DOE Table.

Using Response Surfaces
Response surfaces are functions of various types in which the output parameters are described in
terms of the input parameters. They are built from the Design of Experiments to quickly provide
approximate values of the output parameters everywhere in the analyzed design space, without
requiring a complete solution.

The accuracy of a response surface depends on several factors: the complexity of the variations of the
solution, the number of points in the original Design of Experiments, and the choice of the response
surface type. ANSYS DesignXplorer provides tools to estimate and improve the quality of the response surfaces.

Once response surfaces are built, you can create and manage response points and charts. These post-
processing tools allow you to explore the design, understand how each output parameter is driven
by the input parameters, and see how the design can be modified to improve its performance.

This section contains information about using the Response Surface:

Meta-Model Types (p. 77)

Meta-Model Refinement (p. 89)

Goodness of Fit (p. 92)

Min-Max Search (p. 98)

Response Surface Charts (p. 99)

Meta-Model Types
Several different meta-modeling algorithms are available to create the response surface:

Standard Response Surface - Full 2nd-Order Polynomial (p. 77) (Default)

Kriging (p. 78)

Non-Parametric Regression (p. 83)

Neural Network (p. 83)

Sparse Grid (p. 84)

Standard Response Surface - Full 2nd-Order Polynomial


This is the default.

Regression analysis is a statistical methodology that utilizes the relationship between two or more
quantitative variables so that one dependent variable can be estimated from the others.

A regression analysis assumes that there are a total of n sampling points and for each sampling point
the corresponding values of the output parameters are known. Then the regression analysis determines
the relationship between the input parameters and the output parameter based on these sample points.
This relationship also depends on the chosen regression model. Typically for the regression model, a
second-order polynomial is preferred. In general, this regression model is an approximation of the true
input-to-output relationship and only in special cases does it yield a true and exact relationship. Once
this relationship is determined, the resulting approximation of the output parameter as a function of
the input variables is called the response surface.
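For a single input parameter, the procedure reduces to solving the normal equations of a full second-order polynomial. The standalone Python sketch below does exactly that (with more inputs, cross terms of the form xi*xj enter the model in the same way):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x**2 (one input parameter)
    by solving the 3x3 normal equations with Gaussian elimination."""
    rows = [[1.0, x, x * x] for x in xs]
    # Normal equations: (X^T X) c = X^T y
    a = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for i in range(3):                      # forward elimination with pivoting
        p = max(range(i, 3), key=lambda k: abs(a[k][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, 3):
            m = a[k][i] / a[i][i]
            a[k] = [ak - m * ai for ak, ai in zip(a[k], a[i])]
            b[k] -= m * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        c[i] = (b[i] - sum(a[i][j] * c[j] for j in range(i + 1, 3))) / a[i][i]
    return c

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]            # sampling points (a tiny "DOE")
ys = [3.0 - 2.0 * x + 0.5 * x * x for x in xs]   # exactly quadratic data
c0, c1, c2 = fit_quadratic(xs, ys)
# with exactly quadratic data the fit recovers c0=3, c1=-2, c2=0.5
```

With real simulation outputs the data are rarely exactly quadratic, which is why the regression is only an approximation of the true input-to-output relationship.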

Forward-Stepwise-Regression
In the forward-stepwise-regression, the individual regression terms are iteratively added to the regression
model if they are found to cause a significant improvement of the regression results. For DOE, a partial
F test is used to determine the significance of the individual regression terms.

Improving the Regression Model with Transformation Functions


Only in special cases can output parameters of a finite element analysis, such as displacements or
stresses, be exactly described by a second-order polynomial as a function of the input parameters.
Usually a second order polynomial provides only an approximation. The quality of the approximation
can be significantly improved by applying a transformation function. By default, the Yeo-Johnson
transformation is used.

If the Goodness of Fit of the response surface is not as good as expected for an output parameter, you
can select a different Transformation Type. The Yeo-Johnson transformation is more numerically
stable in its back-transformation. The Box-Cox transformation, by contrast, is less numerically stable
in its back-transformation, but it gives a better fit in some cases. Selecting the Transformation Type
None means that the standard response surface regression is computed without any transformation.
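The sketch below implements the standard published Yeo-Johnson formula in plain Python to show what the transformation does; how DesignXplorer selects the exponent internally is not exposed, so `lam` is left as a free parameter here:

```python
import math

def yeo_johnson(y, lam):
    """Yeo-Johnson power transform (standard published definition).
    Reshapes the output distribution so a second-order polynomial
    can fit the transformed data better."""
    if y >= 0:
        return math.log1p(y) if lam == 0 else ((y + 1) ** lam - 1) / lam
    return (-math.log1p(-y) if lam == 2
            else -(((-y + 1) ** (2 - lam) - 1) / (2 - lam)))

# lam = 1 leaves the data unchanged; other values compress or stretch it
print(yeo_johnson(3.0, 1.0))   # 3.0
```

Unlike Box-Cox, the Yeo-Johnson transform is defined for negative values as well, which is one reason it is the more robust default.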

Kriging
Kriging is a meta-modeling algorithm that provides an improved response quality and fits higher-order
variations of the output parameter. It is an accurate multidimensional interpolation that combines a
polynomial model similar to the one used by the standard response surface (providing a global model
of the design space) with local deviations, so that the Kriging model interpolates the DOE points. The
Kriging meta-model provides refinement capabilities for continuous input parameters, including those
with Manufacturable Values (not supported for discrete parameters). The effectiveness of the Kriging
algorithm is based on the ability of its internal error estimator to improve response surface quality by
generating refinement points and adding them to the areas of the response surface most in need of
improvement.

In addition to manual refinement capabilities, the Kriging meta-model offers an auto-refinement option
which automatically and iteratively updates the refinement points during the update of the Response
Surface. At each iteration of the refinement, the Kriging meta-model evaluates a Predicted Relative Error
in the full parameter space. (DesignXplorer uses Predicted Relative Error instead of Predicted Error because
this allows the same values to be used for all output parameters, even when the parameters have dif-
ferent ranges of variation.) At this step in the process, the Predicted Relative Error for one output
parameter is the Predicted Error of the output parameter normalized by the known Maximum Variation
of the output parameter, as follows:
    PredictedRelativeError = PredictedError / (Omax - Omin)

Where Omax and Omin are the maximum and minimum known values (on design points) of the output
parameter.
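In code form the normalization is a one-liner; the numbers below are made up for illustration:

```python
def predicted_relative_error(predicted_error, o_min, o_max):
    """Normalize a predicted error by the known output variation so the
    same percentage threshold applies to every output parameter."""
    return predicted_error / (o_max - o_min)

# a 0.5-unit predicted error on an output spanning [10, 60] is 1%
pre = 100.0 * predicted_relative_error(0.5, 10.0, 60.0)  # 1.0
```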

For guidelines on when to use the Kriging algorithm, see Changing the Meta-Model (p. 90).

Note

Beginning with ANSYS release 13.0, you do not need to do an initial solve of the response
surface (without refinement points) before an auto-refinement.

Beginning with ANSYS release 14.0, the Interactive refinement type is no longer available for
the Kriging meta-model. If you have an ANSYS 13.0 project with the Kriging Refinement Type
set to either Interactive or Auto, the refinement type will automatically be reset to Manual
as part of the ANSYS release 14.0 migration process. You can leave the refinement type set to
Manual or can change it to Auto after the migration.

How Kriging Auto-Refinement Works


Auto-refinement enables you to refine the Kriging response surface automatically by choosing certain
refinement criteria and allowing an automated optimization-type procedure to add more points to the
design domain where they are most needed.

The prediction of error is a continuous and differentiable function. To find the best candidate
refinement point, the refinement process locates the maximum of the prediction function by running
a gradient-based optimization procedure. If the predicted error for the new candidate refinement
point exceeds the required accuracy, the point is promoted as a new refinement point.

The auto-refinement process continues iteratively, locating and adding new refinement points until
either the refinement has converged (i.e., the Response Surface is accurate enough for direct output
parameters) or the maximum allowable number of refinement points has been generated.
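The loop just described can be sketched as a skeleton (the two callables are hypothetical stand-ins for DesignXplorer's internal error predictor and design-point solve; the toy versions below only exercise the control flow):

```python
def auto_refine(predict_max_error, solve_point, max_points, max_error):
    """Skeleton of the iterative auto-refinement loop. predict_max_error
    returns (point, predicted_error) at the maximum of the error-prediction
    function; solve_point solves and adds a design point to the model."""
    refinement_points = []
    while len(refinement_points) < max_points:
        point, err = predict_max_error()
        if err <= max_error:              # converged: accurate everywhere
            return refinement_points, True
        solve_point(point)
        refinement_points.append(point)
    return refinement_points, False       # hit the refinement-point budget

# Toy stand-ins: the predicted error halves with each added point
state = {"err": 8.0}
def fake_predict():
    return ({"P1": 0.5}, state["err"])
def fake_solve(point):
    state["err"] /= 2.0

added, converged = auto_refine(fake_predict, fake_solve, 10, 1.0)
```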

For more detailed information, see Kriging Algorithms (p. 217) in the DesignXplorer Theory chapter.

Using Kriging Auto-Refinement


When using the Kriging auto-refinement capabilities, there are four main steps to the process: setting
up the response surface, setting up the output parameters, updating the response surface, and gener-
ating verification points. Each step is addressed below.

Setting Up the Response Surface


To set up a Response Surface for Kriging auto-refinement:

1. In the Outline view for the Response Surface, select the Response Surface cell.

2. In the Meta Model section of the Properties view, set the Response Surface Type to Kriging.

3. In the Refinement section of the Properties view:

Set the Refinement Type to Auto.

Enter the Maximum Number of Refinement Points. This is the maximum number of refinement
points that can be generated.

Enter the Maximum Predicted Relative Error (%). This is the maximum predicted relative error that
is acceptable for all parameters.

4. In the Refinement Advanced Options section of the Properties view:

Select the Output Variable Combinations. This determines how output variables will be considered
in terms of predicted relative error and controls the number of refinement points that will be created
per iteration.

Enter the Maximum Crowding Distance Separation Percentage. This determines the minimum al-
lowable distance between new refinement points.

5. In the Verification Points section of the Properties view:

If you wish to generate verification points, select the Generate verification points check box.

If you selected the Generate Verification Points check box, the Number of Verification Points field
displays with the default value of 1. Enter a value if you want a different number of verification points
to be generated.

For detailed descriptions of Response Surface fields in the Properties view, see Kriging Auto-Refinement
Properties.

Setting up the Output Parameters


1. In the Outline view for the Response Surface, select an output parameter.

2. In the Refinement section of the Properties view for the selected output parameter, select or deselect
the Inherit From Model Settings check box. This determines whether the Maximum Predicted Relative
Error defined at the Model level will be applicable to this parameter.

3. If the Inherit From Model Settings check box is deselected, enter the Maximum Predicted Relative
Error (i.e., enter the maximum predicted relative error that you will accept) for the output parameter.
This can be different than the Maximum Predicted Relative Error defined at the Model level.

For detailed descriptions of output properties in the Properties view, see Kriging Auto-Refinement
Properties.

Updating the Response Surface


Once you've defined the Response Surface and output parameter properties, complete the refinement
by right-clicking the Response Surface node and selecting Update from the context menu. Alternatively,
in the component editing view, you can click the Update button on the Toolbar.

The generated points for the refinement will appear in the Response Surface Table view under Refine-
ment Points. As the refinement points are updated, the Convergence Curves chart updates dynamically,
allowing you to monitor the progress of the Kriging auto-refinement. See Kriging Convergence Curves
Chart for more information.

The auto-refinement process continues until either the maximum number of refinement points is reached
or the Response Surface is accurate enough for direct output parameters. If all output parameters have
a Predicted Relative Error that is less than the Maximum Predicted Relative Error defined for them in
the Properties view, the refinement is converged.

Generating Verification Points


Once the auto-refinement has converged (and particularly at the first iteration without refinement
points), it is recommended that you generate verification points to validate the Response Surface accuracy.

As with the Sparse Grid response surface, Kriging is an interpolation. Goodness of Fit is not a reliable
measure for Kriging because the Response Surface passes through all of the design points, making the
Goodness of Fit appear to be perfect. As such, the generation of verification points is essential for as-
sessing the quality of the Response Surface and understanding the actual Goodness of Fit.

If the error at the verification points is larger than the Predicted Relative Error given by Kriging, you
can insert the verification points as refinement points (this must be done in manual refinement mode)
and then run a new auto-refinement so that the new points are included in the generation of the
Response Surface. For more information on how to insert verification points as refinement points, see
Using Verification Points (p. 96).

Kriging Auto-Refinement Properties


Refinement Properties
The Refinement properties in the Response Surface Properties view determine the number and the
spread of the refinement points. To access these properties, select Response Surface in the Outline
view. The properties are in the Refinement section of the Properties view.

Maximum Number of Refinement Points: Maximum number of refinement points that can be generated
for use with the Kriging algorithm.

Number of Refinement Points: Number of existing refinement points.

Maximum Predicted Relative Error (%): Maximum predicted relative error that is acceptable for all
parameters.

Predicted Relative Error (%): Predicted relative error for all parameters.

Converged: Indicates the state of the convergence. Possible values are Yes and No.

Refinement Advanced Options


The Refinement Advanced Options in the Response Surface Properties view determine how output
variables are considered and the minimum allowable distance between refinement points. To access
these properties, select Response Surface in the Outline view. The properties are in the Refinement
Advanced Options section of the Properties view.

Output Variable Combinations:

Maximum Output: Only the output with the largest Predicted Relative Error is considered. Only one
refinement point is generated in each iteration.

All Outputs: All outputs are considered. Multiple refinement points are generated in each iteration.

Sum of Outputs: The combined Predicted Relative Error of all outputs is considered. Only one refinement
point is generated in each iteration.

Crowding Distance Separation Percentage: Minimum allowable distance between new refinement
points, implemented as a constraint in the search for refinement points. If two candidate refinement
points are closer together than the defined minimum distance, only the first candidate is inserted as a
new refinement point.
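A minimal sketch of such a minimum-distance filter (illustrative Python, not the actual implementation):

```python
import math

def filter_by_crowding(candidates, min_dist):
    """Keep a candidate refinement point only if it is at least min_dist
    from every point already accepted (earlier candidates win ties)."""
    accepted = []
    for c in candidates:
        if all(math.dist(c, a) >= min_dist for a in accepted):
            accepted.append(c)
    return accepted

cands = [(0.0, 0.0), (0.05, 0.0), (0.5, 0.5)]
print(filter_by_crowding(cands, 0.1))  # [(0.0, 0.0), (0.5, 0.5)]
```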

Output Parameter Settings


The output parameter settings in the Refinement section of the Output Parameter Properties view
determine how the Maximum Predicted Relative Error applies to output parameters. Auto-refinement
can be set for one or more output parameters, depending on the Output Variable Combinations se-
lected. To access these settings, select an output parameter in the Outline view. The settings are in
the Refinement section of the Properties view for the output parameter.

Inherit From Model Settings: Indicates if the Maximum Predicted Relative Error defined by the user at
the Model level is applicable for this output parameter. Possible values are Yes and No. Default value is
Yes.

Maximum Predicted Relative Error: Displays only when the Inherit From Model Settings property is
set to No. Maximum predicted relative error that the user will accept for this output parameter. This value
can be different than the Maximum Predicted Relative Error defined at the Model level.

Predicted Relative Error: Predicted Relative Error for this output parameter.

Kriging Convergence Curves Chart


The Kriging Convergence Curves Chart allows you to monitor the automatic refinement process of the
Kriging Response Surface for one or more selected output parameters. The X-axis displays the number
of refinement points used to refine the Response Surface, and the Y-axis displays the percentage of the
Current Predicted Relative Error. The chart is automatically inserted in the Metrics folder in the outline
and is dynamically updated while Kriging refinement runs.

There are two curves for each output parameter: one representing the percentage of the Current
Predicted Relative Error, and one representing the Maximum Predicted Relative Error required for
that parameter.

Additionally, there is a single curve that represents the Maximum of the Predicted Relative Error for
output parameters that are not converged.

You can stop/interrupt the process with the progress bar to adjust the requested Maximum Error (or
to change chart properties) during the run.

Non-Parametric Regression
Non-parametric regression provides improved response quality and is initialized with one of the available
DOE types.

The non-parametric regression (NPR) algorithm is implemented in ANSYS DesignXplorer as a
meta-modeling technique prescribed for outputs with predictably high nonlinear behavior with respect
to the inputs.

NPR belongs to a general class of Support Vector Method (SVM) type techniques. These are data classification
methods that use hyperplanes to separate groups of data. The regression method works similarly,
the main difference being that the hyperplane is used to categorize a subset of the input
sample vectors which are deemed sufficient to represent the output in question. This subset is called
the support vector set.

In the current version, the internal parameters of the meta-model are fixed to constant values and are
not optimized. The values are determined from a series of benchmark tests and strike a compromise
between the meta-model accuracy and computational speed. For a large family of problems, the current
settings will provide good results; however, for some problem types (like ones dominated by flat surfaces
or lower order polynomials), some oscillations may be noticed between the DOE points.

In order to circumvent this, you can use a larger number of DOE points or, depending on the fitness
landscape of the problem, use one of the several optimal space filling DOEs provided. In general, it is
suggested that the problems first be fitted with a quadratic response surface, and the NPR fitting adopted
only when the Goodness of Fit metrics from the quadratic response surface model is unsatisfactory.
This will ensure that the NPR is only used for problems where low order polynomials do not dominate.

Neural Network
This mathematical technique is based on the natural neural network in the human brain.

Using Response Surfaces

In order to interpolate a function, we build a network with three levels (Input, Hidden, Output) in which
the connections between the levels are weighted.

Each arrow in the network diagram is associated with a weight (w) and each ring is called a cell (analogous
to a neuron).

If the inputs are xi, the hidden level contains functions gj(xi), and the output solution is:

yk(xi) = Σj wjk K(gj(xi))

where K is a predefined function, such as the hyperbolic tangent or an exponential-based function,
chosen to obtain something similar to the binary behavior of the electrical brain signal (like a step function).
The function is continuous and differentiable.

The weights (wjk) are obtained from an algorithm that minimizes (as in the least squares method)
the distance between the interpolation and the known values (design points). This is called learning.
At each iteration, the error is checked against the design points that are not used for learning; the
design points must therefore be separated into a learning set and an error-checking set.

The error decreases and then increases again when the interpolation order becomes too high. The
minimization algorithm is stopped when the error is at its lowest.

This method uses a limited number of design points to build the approximation. It works better when
the number of design points and the number of intermediate cells is high, and it can give interesting
results for problems with several parameters.
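The interpolation form described above can be sketched in a few lines (a minimal illustration, not the actual Neural Network implementation; the weights here are random rather than learned):

```python
import math
import random

# Minimal sketch of the network form y(x) = sum_j w_j * K(g_j(x)):
# each hidden cell computes a weighted sum g_j(x) of the inputs, the kernel
# K (here the hyperbolic tangent) squashes it, and the output is a weighted
# sum of the hidden-cell responses. Real training would fit the weights to
# the design points; here they are random for illustration.

random.seed(0)

def forward(x, hidden_w, out_w):
    """x: input values; hidden_w: per-cell input weights; out_w: weights wjk."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(cell, x)))  # K(g_j(x))
              for cell in hidden_w]
    return sum(wj * hj for wj, hj in zip(out_w, hidden))

# A network with 2 inputs and 3 hidden cells:
hidden_w = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
out_w = [random.uniform(-1, 1) for _ in range(3)]
print(forward([0.5, -0.2], hidden_w, out_w))
```

Because tanh is continuous and differentiable, the resulting interpolation is as well, which matches the property stated above.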

Sparse Grid
The Sparse Grid meta-model provides refinement capabilities for continuous parameters, including
those with Manufacturable Values (discrete parameters are not supported). Sparse Grid uses an adaptive
response surface, which means that it refines itself automatically. A dimension-adaptive algorithm allows
it to determine which dimensions are most important to the objective functions, thus reducing
computational effort.


How Sparse Grid Works


Sparse Grid allows you to select certain refinement criteria. When you update the response surface,
Sparse Grid uses an automated local refinement process to determine the areas of the response surface
that are most in need of further refinement. It then concentrates refinement points in these areas, al-
lowing the response surface to reach the specified level of accuracy more quickly and with fewer design
points.

The Sparse Grid adaptive algorithm is based on a hierarchy of grids. The Sparse Grid Initialization
DOE type generates a DOE matrix containing all the design points for the smallest required grid: level 0
(the point at the current values) plus level 1 (two points per input parameter). If the expected
level of quality is not met, the algorithm further refines the grid by building a new level in the corresponding
directions. This process is repeated until one of the following occurs:

The response surface reaches the requested level of accuracy.

The maximum number of refinement points (as defined by the Maximum Number of Refinement Points
property) has been reached.

The maximum depth (the maximum number of levels that can be created in the hierarchy, as defined by
the Maximum Depth property) is reached in one level. Once the maximum depth for a direction is reached,
there is no further refinement in that direction.
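The initial grid described above (level 0 plus level 1) can be sketched as follows; this is an assumed reading of the text for illustration, not the actual Sparse Grid Initialization algorithm:

```python
# Hedged sketch of the smallest required grid: one center point at the
# current values (level 0), then two extra points per input parameter
# obtained by moving that parameter alone to its lower and upper bound,
# all other parameters staying at the center (level 1).

def initial_sparse_grid(bounds):
    """bounds: list of (lower, upper) tuples, one per input parameter."""
    center = [(lo + hi) / 2.0 for lo, hi in bounds]
    points = [list(center)]                      # level 0
    for i, (lo, hi) in enumerate(bounds):        # level 1: 2 points per input
        for value in (lo, hi):
            p = list(center)
            p[i] = value
            points.append(p)
    return points

pts = initial_sparse_grid([(0.0, 10.0), (1.0, 3.0)])
print(len(pts))   # prints 5, i.e. 1 + 2 * number_of_inputs
```

For n input parameters, the initial DOE therefore contains 1 + 2n design points; refinement then adds levels only in the directions that matter.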

The relative error for an output parameter is the error between the predicted and the observed output
values, normalized by the known maximum variation of the output parameter at this step of the process.
Since there are multiple output parameters to process, DesignXplorer computes the worst relative error
value over all of the output parameters and then compares it against the maximum relative error (as
defined by the Maximum Relative Error property). As long as at least one output parameter has a
relative error greater than the expected error, the maximum relative error criterion is not satisfied.
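The worst-relative-error computation described above can be sketched as follows (an assumed reading of the text, not DesignXplorer internals; the output name P5 is hypothetical):

```python
# Sketch of the convergence test: each output's relative error is the
# predicted-vs-observed difference normalized by that output's known
# variation, and the worst value over all outputs is compared to the
# Maximum Relative Error property.

def worst_relative_error(observed, predicted):
    """observed / predicted: dict of output name -> list of values."""
    worst = 0.0
    for name, obs in observed.items():
        variation = (max(obs) - min(obs)) or 1.0   # guard: flat output
        for o, p in zip(obs, predicted[name]):
            worst = max(worst, abs(p - o) / variation)
    return 100.0 * worst                            # as a percentage

obs = {"P5": [1.0, 2.0, 3.0]}
pred = {"P5": [1.0, 2.5, 3.0]}
print(worst_relative_error(obs, pred))   # prints 25.0 (0.5 / 2.0, in percent)
```

With a Maximum Relative Error of 5%, this 25% result would keep the refinement running.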

Sparse Grid Requirements


Sparse Grid response surface requires a specific Clenshaw-Curtis grid that is generated only by the
Sparse Grid Initialization DOE type, so the Sparse Grid response surface can only be used with the
Sparse Grid Initialization DOE; if you've defined another type of DOE, the Sparse Grid response surface
cannot be updated. Note, however, that the Sparse Grid Initialization DOE type can be used by other
types of response surfaces.

Because Sparse Grid uses an automatic refinement algorithm, it is not possible to add refinement points
manually. As a result, for a Sparse Grid response surface:

The Sparse Grid Refinement Type property is set to Auto and cannot be changed.

The Insert as Refinement Point operation is not available from the right-click context menu. Also, if you
use commands to attempt to insert a refinement point, you will receive an error.

Related Topics:

Sparse Grid Algorithms (p. 221)


Using Sparse Grid Refinement
Sparse Grid Auto-Refinement Properties
Sparse Grid Convergence Curves Chart


Using Sparse Grid Refinement


When using Sparse Grid auto-refinement capabilities, there are four main steps to the process: setting
up the Design of Experiments, setting up the Response Surface, updating the Response Surface, and
generating verification points. Each step is addressed below.

Setting Up the Design of Experiments


To set up the Design of Experiments for Sparse Grid auto-refinement:

1. In the Outline view for the Design of Experiments, select the Design of Experiments node.

2. In the Design of Experiments section of the Properties view, set the Design of Experiments Type to
Sparse Grid.

3. Either preview or update the Design of Experiments to validate your selections. This will cause the
Response Surface Type property for downstream Response Surface components to default to Sparse Grid.

Note

If the DOE component is shared by multiple systems, the DOE definition will apply to the
Response Surface components for each of those systems.

Setting Up the Response Surface


To set up refinement properties for a Sparse Grid auto-refinement:

1. In the Outline view for the Response Surface, select the Response Surface node.

2. In the Meta Model section of the Properties view, verify that the Response Surface Type property is
set to Sparse Grid.

3. In the Refinement section of the Properties view:

Enter the Maximum Relative Error (%) to be allowed for all of the output parameters. The smaller
the value, the more accurate the response surface will be.

Enter the Maximum Depth or number of grid levels that can be created in a given direction. (You can
also adjust this value later, as needed according to your update results.)

Enter the Maximum Number of Refinement Points, the maximum number of refinement points that
can be generated as part of the refinement process.

For detailed descriptions of Sparse Grid response surface Properties, see Sparse Grid Auto-Refinement
Properties (p. 87).

Updating the Response Surface


Once you've defined the Design of Experiments and Response Surface, update the Response Surface
by clicking the Update button in the toolbar or right-clicking the component cell and selecting Update.
This begins the adaptive process that will generate the Sparse Grid response surface.

At any time, you can interrupt the Sparse Grid refinement via the Progress bar to change properties
or to see partial results; the refinement points that have already been calculated are visible and the


displayed charts are based on the Response Surface's current level of refinement. The refinement points
that have not been calculated are displayed with an Update Required icon to indicate which output
parameters require an update.

If a design point fails during the refinement process, Sparse Grid stops refining in the area where the
failed design point is located, but continues to refine in the rest of the parameter space to the degree
possible. Failed refinement points are indicated by the Update Failed, Update Required icon. You can
attempt to pass the failed design points by re-updating the Response Surface.

The auto-refinement process continues until the Maximum Relative Error (%) objective is attained,
the Maximum Depth limit is reached for all input parameters, the Maximum Number of Refinement
Points is reached, or the response surface converges. The refinement is converged when the Current
Relative Error (%) of every output parameter is lower than the Maximum Relative Error (%) defined
for it in the Properties view.

Note

If the Sparse Grid refinement does not appear to be converging, it is possible to accept the
current level of convergence. To accept the current level of convergence:

1. Stop the process.

2. Either set the Maximum Relative Error (%) value slightly above the Current Relative Error
(%) value or set the Maximum Number of Refinement Points value equal to the Number
of Refinement Points value.

3. Update the Response Surface again.

Generating Verification Points


As with the Kriging response surface, the Sparse Grid is an interpolation. To assess the quality of the
response surface, verification points are needed. Once the Sparse Grid auto-refinement has converged,
it is recommended that you generate verification points to validate the response surface accuracy.

Note

Since the Sparse Grid algorithm is an automatic refinement process, you cannot add refinement
points manually as you can with other meta-model types. To generate verification points automatically,
select the Generate Verification Points check box in the Response Surface Properties view
and then update the response surface.

For more detailed information on assessing the quality of a response surface, see Goodness of Fit (p. 92)
and Using Verification Points (p. 96).

Sparse Grid Auto-Refinement Properties


The Refinement properties in the Sparse Grid response surface Properties view determine the number
and the spread of the refinement points.

Refinement Type: Type of refinement process. Defaults to Auto (read-only).


Maximum Relative Error (%): Maximum relative error allowable for the response surface. This value is
used to compare against the worst relative error obtained for all output parameters. So long as any output
parameter has a relative error greater than the expected relative error, this criterion is not validated. This
property is a percentage and defaults to 5.

Maximum Depth: Maximum depth (number of hierarchy levels) that can be created as part of the Sparse
Grid hierarchy. Once the number of levels defined in this property is reached for a direction, refinement
does not continue in that direction. Defaults to 4. A minimum value of 2 is required (because the Sparse
Grid Initialization DOE type is already generating levels 0 and 1).

Maximum Number of Refinement Points: Maximum number of refinement points that can be generated
for use with the Sparse Grid algorithm. Defaults to 1000.

Number of Refinement Points: Number of existing refinement points.

Current Relative Error (%): Current level of relative error.

Converged: Indicates the state of the convergence. Possible values are Yes and No.

Sparse Grid Convergence Curves Chart


The Sparse Grid Convergence Curves chart allows you to monitor the automatic refinement process of
the Sparse Grid Response Surface: the X-axis displays the number of design points used to build the
response surface, and the Y-axis displays the convergence criteria. This chart is automatically inserted
in the Metrics folder in the outline and is dynamically updated while Sparse Grid runs.

There is one curve per output parameter to represent the current relative error (in percentage), and
one curve to represent the maximum relative error for all direct output parameters. The automatic re-
finement stops when the maximum relative error required (represented as a horizontal threshold line)
has been met.

You can disable one or several output parameter curves and keep only the curve of the maximum
relative error.

During the run, you can use the Progress Bar to stop or interrupt the process so that you can adjust
the requested maximum error (or change chart properties) before continuing.

Meta-Model Refinement
The quality of your response surface is affected by the type of meta-modeling algorithm used to generate
it and is measured by the response surface's Goodness of Fit. The following sections provide
recommendations on making your initial selection of a meta-modeling algorithm, evaluating the meta-model
performance in terms of Goodness of Fit, and changing the meta-model as needed to improve the
Goodness of Fit.

Related Topics:
Working with Meta-Models
Changing the Meta-Model
Performing a Manual Refinement

Working with Meta-Models


Prepare Your Model to Enhance Goodness of Fit
Use a sufficient number of design points The number of design points should always exceed the
number of inputs. Ideally, you should have at least twice as many design points as inputs. Most of the
standard DOE types are designed to generate a sufficient number, but custom DOE types may not.

Reduce the number of input parameters Only keep the input parameters that are playing a major
role in your study. Disable any inputs that are not relevant by deselecting them in the Design of Experiments
Outline view. You can determine which are relevant from a correlation or sensitivity analysis.

Requirements and recommendations regarding the number of input parameters vary according to
the DOE type selected. For more information, see Number of Input Parameters for DOE Types (p. 69).

As a general rule, always create verification points To specify that verification points will be created
when the response surface is generated, select the Generate Verification Points check box in the Response
Surface Properties view.

Start with a Standard Response Surface


In practice, it's a good idea to always begin with the standard response surface produced by the
Standard Response Surface 2nd-Order Polynomial meta-model. Once the response surface is built,
assess its quality by reviewing its Goodness of Fit information.

Review Goodness of Fit Information


To assess the quality of the response surface, you should review the Goodness of Fit information for
each output parameter. To check Goodness of Fit information:

1. On the Project Schematic, right-click the Response Surface cell and select Edit from the context menu.

2. Under the Metrics node in the response surface Outline view, select Goodness of Fit.

Goodness of Fit information for each output parameter displays in the Table view and is illustrated
in the Predicted vs. Observed chart in the Chart view.

3. Review the Goodness of Fit information for the response surface, paying particular attention to the
Coefficient of Determination property in the Table view. The closer this value is to 1, the better the
response surface. For details on different criteria, see Goodness of Fit Criteria (p. 93).


If the Goodness of Fit is acceptable, add verification points and then recheck the Goodness of Fit. If
needed, you can refine the response surface further manually. See Performing a Manual Refine-
ment (p. 92) for details.

If the Goodness of Fit is poor, try changing your meta-model. See Changing the Meta-Model (p. 90)
for details.

Changing the Meta-Model


Changing meta-model types or changing the available options for the current meta-model can improve
the Goodness of Fit. To change your meta-model type:

1. On the Project Schematic, right-click the Response Surface cell and select Edit from the context menu.

2. In the response surface Outline view, select the Response Surface node.

3. Under Meta-Model in the Properties view, select a different value for Response Surface Type.

4. Update the response surface either by clicking the Update button in the Toolbar or by right-clicking the
Response Surface node and selecting Update from the context menu.

5. Review the Goodness of Fit information for each output parameter to see if the new meta-model provides
a better fitting.

Once you've achieved the desired level of Goodness of Fit, add verification points and then recheck
the Goodness of Fit. If needed, you can refine the response surface further manually. See Performing a
Manual Refinement (p. 92) for details.

Guidelines for Changing the Meta-Model


Kriging
If the Standard Response Surface 2nd-Order Polynomial meta-model does not produce a response
surface with the desired level of Goodness of Fit, try the Kriging meta-model.

After updating the response surface with the Kriging method selected, recheck the Goodness of
Fit.

For Kriging, the Coefficient of Determination must have a value of 1. If it does not, this means
that the model is over-constrained and not suitable for refinement via the Kriging algorithm.

Kriging fits the response surface through all the design points, so many of the other metrics will
always be perfect. Therefore, it is particularly important to run verification points with Kriging.

Non-Parametric Regression
If the model is overconstrained and not suitable for refinement via Kriging, try switching to the Non-
Parametric Regression meta-model.

Other Meta-Models
If you decide to use one of the other meta-models, consider your selection carefully to ensure that the
meta-model suits your specific purpose. See Meta-Model Characteristics (p. 91) for details.


Meta-Model Characteristics
Although it is recommended to always start with the Standard Response Surface 2nd-Order Polyno-
mial meta-model and then switch to the Kriging meta-model, you can opt to use one of the other
meta-model types.

Keep in mind that different meta-model types have varying characteristics, and so lend themselves to
different types of responses; the meta-model used impacts the quality and Goodness of Fit of the res-
ulting response surface. Use the characteristics noted below to guide your selection of the meta-model
most suited to your design scenario.

Once you determine which meta-model has the best fit for your particular application, you can use
that as your new default for similar projects in the future.

Standard Response Surface 2nd-Order Polynomial

Default meta-model; creates a Standard Response Surface.

Effective when the variation of the output is smooth with regard to the input parameters.

Sparse Grid

Suited for studies containing discontinuities.

Use when solve is fast.

Non-Parametric Regression

Suited to nonlinear responses.

Use when results are noisy.

Typically slow to compute.

Neural Network

Suited to highly nonlinear responses.

Use when results are noisy.

Control over the algorithm is very limited.

Kriging

Efficient in a large number of cases.

Suited to highly nonlinear responses.

Do NOT use when results are noisy; Kriging is an interpolation that matches the points exactly.

Always use verification points to check Goodness of Fit.


Performing a Manual Refinement


With the exception of Sparse Grid, the manual refinement capability is available for all of the meta-
model types. Manual refinement is a way to force the response surface to take into account points of
your choice, in addition to the points already in the Design of Experiments.

Basically, after a first optimization study, you may want to insert the best candidate design as a refinement
point in order to improve the response surface quality in this area of the design space.

The refinement points are basically the same as design points since, to obtain the output parameter
values of a refinement point, a design point update (real solve) is performed by DesignXplorer.

You can also create a refinement point easily from existing results, using the Insert as Refinement
Point operation, which is available in the context menu when you right-click relevant entities in the
Table view or the Chart view. For instance, you can right-click a response point or a Candidate Point
and insert it as a refinement point.

The refinement points are listed in the table view of the response surface, in a table entitled Refinement
Points. You can insert a new refinement point directly in this table by entering the values of the input
parameters.

To update the refinement points and rebuild the response surface to take them into account,
click the Update button in the toolbar. Each out-of-date refinement point is updated, and then the
response surface is rebuilt from the Design of Experiments points and the refinement points.

With manual refinement, you can insert a refinement point in the refinement points table; beginning
with ANSYS release 13.0, you do not need to perform an initial solve of the response surface (without
refinement points) before updating your Response Surface with your manual refinement.

You can change the editing mode of the Table of Refinement Points in order to edit the output
parameter values. You can also copy/paste data and import data from a CSV file (right-click and select
Import Refinement Points).

See Working with Tables (p. 205) for more information.

Goodness of Fit
You can view Goodness of Fit information for any of the output parameters in a response surface. To
do so, edit the Response Surface cell from the Project Schematic and select the Goodness of Fit object
under the Metrics section of the Outline view.

If any of the input parameters is discrete, a different response surface is built for each combination of
the discrete levels, and the quality of the response surface might differ from one configuration to
another. To review the Goodness of Fit for each of these different response surfaces, select the discrete
parameter values in the Properties view. The Table and Chart views will display the information
corresponding to the associated response surfaces.

Goodness of Fit is closely related to the meta-model algorithm used to generate the response surface.
If the Goodness of Fit is not of the expected quality, you can try to improve it by adjusting the meta-
model. See Changing the Meta-Model (p. 90) for more information.


Goodness of Fit Criteria


In the Table view, the Response Surface Goodness of Fit information is displayed for each output
parameter. The following criteria are calculated for the points taken into account in the construction
of the response surface, which are the Design of Experiments points and the refinement points:

Coefficient of Determination (R2): The percent of the variation of the output parameter that can be ex-
plained by the response surface regression equation. That is, the Coefficient of Determination is the ratio
of the explained variation to the total variation. The best value is 1.

The points used to create the response surface are likely to contain variation for each output para-
meter (unless all the output values are the same, which will result in a flat response surface). This
variation is illustrated by the response surface that is generated. If the response surface were to pass
directly through each point (which is the case for the Kriging meta-model), the coefficient of determ-
ination would be 1, meaning that all variation is explained.

Mathematically represented as:

R² = 1 − [ Σi (yi − ŷi)² ] / [ Σi (yi − ȳ)² ],  with the sums taken over i = 1, …, N

where: (p. 94)

Adjusted Coefficient of Determination (Adjusted R2): The Adjusted Coefficient of Determination takes
the sample size into consideration when computing the Coefficient of Determination. The best value is
1.

Usually this is more reliable than the usual Coefficient of Determination when the number of samples
is small (< 30). Only available for standard response surfaces.

Mathematically represented as:

R²adj = 1 − [ (N − 1) / (N − g − 1) ] · (1 − R²)

where: (p. 94)

Maximum Relative Residual: The maximum distance (relatively speaking) out of all of the generated
points from the calculated response surface to each generated point. The best value is 0%; in general,
the closer the value is to 0%, the better the quality of the response surface.

However, in some situations, you can have a larger value and still have a good response surface. This
may be true, for example, when the mean of the output values is close to zero. See the formula below.

Mathematically represented as:

Maximum Relative Residual = 100 · maxi | (yi − ŷi) / ȳ |

where: (p. 94)


Root Mean Square Error: This is the square root of the average square of the residuals at the DOE points
for regression methods. The best value is 0; in general, the closer the value is to 0, the better the quality
of the response surface.

Mathematically represented as:

RMSE = √[ (1/N) Σi (yi − ŷi)² ]

where: (p. 94)

Relative Root Mean Square Error: This is the square root of the average square of the residuals scaled
by the actual output values at the points for regression methods. The best value is 0%; in general, the
closer the value is to 0%, the better the quality of the response surface.

However, in some situations, you can have a larger value and still have a good response surface. This
may be true, for example, when part of the output values are close to zero. For example, you can
obtain 100% of relative error if the observed value = 1e-10 and the predicted value = 1e-8, but if the
range of output values is 1, this error becomes negligible.

Mathematically represented as:

Relative RMSE = 100 · √[ (1/N) Σi ( (yi − ŷi) / yi )² ]

where: (p. 94)

Relative Maximum Absolute Error: This is the absolute maximum residual value relative to the standard
deviation of the actual output data, modified by the number of samples. The best value is 0%; in general,
the closer the value is to 0%, the better the quality of the response surface.

Note that this value and the Relative Average Absolute Error value correspond to the Maximum
Error and the Average Absolute Error scaled by the standard deviation. For example, the Relative Root
Mean Square Error becomes negligible if both of these values are small.

Mathematically represented as:

Relative Maximum Absolute Error = 100 · maxi | yi − ŷi | / σy

where: (p. 94)

Relative Average Absolute Error: This is the average of the residuals relative to the standard deviation
of the actual outputs. This is useful when the number of samples is low (< 30). The best value is 0%; in
general, the closer the value is to 0%, the better the quality of the response surface.

Note that this value and the Relative Maximum Absolute Error value correspond to the Maximum
Error and the Average Absolute Error scaled by the standard deviation. For example, the Relative Root
Mean Square Error becomes negligible if both of these values are small.

Mathematically represented as:

Relative Average Absolute Error = 100 · (1/N) Σi | yi − ŷi | / σy

where: (p. 94)

yi = value of the output parameter at the i-th sampling point

ŷi = value of the regression model at the i-th sampling point

ȳ = arithmetic mean of the values yi

σy = standard deviation of the values yi

N = number of sampling points

g = number of polynomial terms for a quadratic response surface
(not counting the constant term)
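As a worked illustration, the short sketch below computes the Coefficient of Determination, the Adjusted Coefficient of Determination, and the Root Mean Square Error for a small set of sampling points (illustrative only; not DesignXplorer code):

```python
import math

# Evaluate a few of the Goodness of Fit criteria from their formulas:
# R2, RMSE, and (for standard response surfaces) Adjusted R2.

def goodness_of_fit(y, y_hat, g=None):
    """y: observed values; y_hat: regression-model values; g: polynomial terms."""
    n = len(y)
    mean = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # residual variation
    ss_tot = sum((yi - mean) ** 2 for yi in y)                # total variation
    r2 = 1.0 - ss_res / ss_tot                                # Coefficient of Determination
    out = {"R2": r2, "RMSE": math.sqrt(ss_res / n)}           # Root Mean Square Error
    if g is not None:                                         # Adjusted R2
        out["AdjR2"] = 1.0 - (1.0 - r2) * (n - 1) / (n - g - 1)
    return out

# A perfect fit (e.g. Kriging passing through every design point):
print(goodness_of_fit([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]))
# prints {'R2': 1.0, 'RMSE': 0.0}
```

A response surface that passes through every point yields R² = 1 and RMSE = 0, which is why Kriging's metrics look perfect and verification points are essential for it.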

A graphical indication is given in each table cell as to how close the parameter comes to the ideal value
for each Goodness of Fit characteristic. Three gold stars indicate the best match, while three red crosses
indicate the worst.

Note

Root Mean Square Error has no rating because it is not a bounded characteristic.

Related Topics:
Predicted versus Observed Chart
Advanced Goodness of Fit Report
Using Verification Points

Predicted versus Observed Chart


In the Chart view, a scatter chart presents for each output parameter the values predicted from the
response surface versus the values observed from the design points. This chart lets you quickly determine
if the response surface correctly fits the points of the Design of Experiments and the refinement table:
the closer the points are to the diagonal line, the better the response surface fits the points.
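The visual check this chart supports can be sketched numerically (the normalization scheme here is an assumption for illustration, not necessarily DesignXplorer's):

```python
# Scale one output parameter's values to [0, 1], then measure how far each
# (observed, predicted) pair falls from the diagonal of the chart.
# A point exactly on the diagonal has predicted == observed.

def diagonal_deviations(observed, predicted):
    lo, hi = min(observed), max(observed)
    span = (hi - lo) or 1.0                        # guard: flat output
    norm_obs = [(v - lo) / span for v in observed]
    norm_pred = [(v - lo) / span for v in predicted]
    return [abs(p - o) for o, p in zip(norm_obs, norm_pred)]

dev = diagonal_deviations([10.0, 20.0, 30.0], [10.0, 25.0, 30.0])
print(max(dev))   # prints 0.25: the middle point sits off the diagonal
```

A large deviation flags a point that the response surface does not fit well, which on the chart would appear far from the diagonal line.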


By default, all of the output parameters are displayed on the chart and in this case, the output values
are normalized. Use the check boxes in the Properties view to remove or add an output parameter
from the chart. If only one output parameter is plotted, the values are not normalized.

If you position the mouse cursor over a point of the chart, the corresponding parameter values appear
in the Properties view, including the predicted and observed values for the output parameters.

On the same chart, you can also visualize the verification points. These points are not taken into account
to build the response surface, so if they appear close to the diagonal line, the response surface correctly
represents the parametric model. Otherwise, not enough data has been provided for the response surface
to capture the parametric behavior of the model, and the response surface needs to be refined.

If you right-click a point of the chart, you can insert it as a refinement point so that it is taken into
account in the response surface. This is a good way to improve the response surface around a verification
point that shows an insufficient Goodness of Fit.

You can add additional Goodness of Fit objects to the Metrics section of the Outline. Simply right-click
on the Metrics folder and select Insert Goodness of Fit. A new Goodness of Fit object will be added
with its own table and chart. You can also delete Goodness of Fit objects from the Outline by right-
clicking on the object and selecting Delete.

You can duplicate a Goodness of Fit by selecting it in the Outline view and either choosing Duplicate
from the context menu or using drag and drop. This operation attempts to update the new Goodness
of Fit, so duplicating an up-to-date Goodness of Fit results in an up-to-date duplicate.

Advanced Goodness of Fit Report


The Advanced Goodness of Fit report is available only for the standard response surface generated
by the Standard Response Surface 2nd-Order Polynomial meta-model algorithm. It can be displayed
for any output parameter.

To view the Advanced Goodness of Fit report for a given output parameter, right-click on the column
for that parameter in the Table view and select Generate Advanced Goodness of Fit Report from the
context menu.

When reviewing the report, if the Maximum VIF for Full Regression Model value is very high (>10),
there is a high interdependency among the terms and the response surface is not unique. In other
words, the response surface is not reliable despite the good R2 value. In this case, you should add more
points to enrich the response surface.
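The VIF check described above is the standard collinearity diagnostic VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing the j-th regression term on the remaining terms. The sketch below (an illustration, not the code behind the report) shows how nearly collinear terms drive the maximum VIF past the threshold of 10:

```python
import numpy as np

def max_vif(X):
    """Maximum variance inflation factor over the columns of a
    regression design matrix X (constant term excluded).
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from a least-squares
    regression of column j on the remaining columns."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        # fit column j on the other columns plus an intercept
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r2) if r2 < 1.0 else np.inf)
    return max(vifs)

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
# Two nearly collinear terms -> very large VIF (unreliable fit)
X_bad = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=50)])
# Two independent terms -> VIF close to 1
X_ok = rng.normal(size=(50, 2))
print(max_vif(X_bad), max_vif(X_ok))
```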

Using Verification Points


Response surfaces are built from a Design of Experiments (DOE). The Goodness of Fit calculations
compare the response surface outputs with the DOE results used to create them. For response surface
types that try to find the "best fit" of the response surface to these DOE points (such as Full 2nd-Order
Polynomial), you can get an idea of how well the fit was accomplished. However, for interpolated response
surface methods that force the response surface to pass through all of the DOE points (such as Kriging),
the Goodness of Fit will usually appear to be perfect. In this case, Goodness of Fit indicates that the
response surface passed through the DOE points used to create it, but does not indicate whether the
response surface captures the parametric solution.


A better way of verifying that the response surface accurately approximates the output parameter values
is to compare the predicted and observed values of the output parameters at verification points.
These points are calculated separately, after the response surface is built, and are useful for validating
any type of response surface. In particular, you should always use verification points to validate the
accuracy of interpolated response surfaces, such as Kriging or Sparse Grid.

You can specify that verification points are generated automatically by selecting the Generate
Verification Points check box in the Response Surface Properties view. When this check box is selected,
the Number of Verification Points property allows you to specify the number of verification points to
be generated.

You can insert a new verification point directly into the Table of Verification Points. To do so, select
the Response Surface cell on the Response Surface Outline view and then enter the values of the input
parameters into the New Verification Point row of the Table view. Update the verification points and
Goodness of Fit information by right-clicking on the row and selecting Update from the context menu.

After the response surface is created, the verification points are placed in locations that maximize the
distance from existing DOE points and refinement points (the Optimal Space-Filling algorithm). Each
verification point is then calculated by a design point update (i.e., a real solve), and the results are
compared with the response surface predictions to compute the difference.
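The idea of placing points to maximize distance from existing points can be illustrated with a simple greedy maximin sketch. This is only a toy stand-in for the Optimal Space-Filling algorithm actually used by DesignXplorer; the point coordinates and bounds below are hypothetical:

```python
import math
import random

def add_verification_points(existing, n_new, bounds, n_candidates=500, seed=0):
    """Greedy maximin sketch of space-filling placement: each new point
    is the random candidate whose minimum distance to all points chosen
    so far is largest. Illustrative only - the real Optimal Space-Filling
    algorithm is more sophisticated."""
    rng = random.Random(seed)
    points = list(existing)
    new_points = []
    for _ in range(n_new):
        best, best_d = None, -1.0
        for _ in range(n_candidates):
            cand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
            d = min(math.dist(cand, p) for p in points)
            if d > best_d:
                best, best_d = cand, d
        points.append(best)
        new_points.append(best)
    return new_points

# DOE points clustered near one corner of a unit square;
# the verification points land well away from them
doe = [(0.1, 0.1), (0.2, 0.15), (0.1, 0.25)]
vp = add_verification_points(doe, 2, [(0.0, 1.0), (0.0, 1.0)])
print(vp)
```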

The results are displayed in the Goodness of Fit Table view and Chart view. The Table view displays
Goodness of Fit criteria calculated for the verification points (i.e., only the verification points are compared
with the response surface in these calculations). From the Goodness of Fit information, you can determine
whether the response surface accurately approximates output parameter values at each of the verification
point locations. The Chart view also shows the verification points in the Predicted vs. Observed chart.

If a verification point reveals that the current response surface is of poor quality, you can insert it as
a refinement point so that it is taken into account to improve the accuracy of the response surface. To
do so, select the Response Surface cell in the Response Surface Outline view, right-click on the
verification point in the Table view, and select Insert as Refinement Point from the context menu.

Note

The verification points are not used to build the response surface until they are turned into
refinement points and the response surface is recalculated.

The Insert as Refinement Point option is not available for Kriging and Sparse Grid response
surfaces.

By default, the output parameters of the verification point table are grayed out and are filled in only
by a real solve. However, you can change the editing mode of the Table of Verification Points in order
to edit the output parameter values. This allows you to enter verification points manually, rather
than by performing real solves, and still compare them with the response surface. You can also
copy/paste data or import data from a CSV file by right-clicking and selecting Import Verification
Points from the context menu. This is a way to compare either experimental data or data run in another
simulation with the simulation response surface.

See Working with Tables (p. 205) for more information.


Min-Max Search
The Min-Max Search examines the entire output parameter space of a Response Surface to approximate
the minimum and maximum values of each output parameter. In the Outline, if the Enable box is
checked, a Min-Max Search is performed each time the Response Surface is updated. Uncheck the box
to disable the Min-Max Search feature. You may want to disable this feature in cases where the search
could be very time-consuming (i.e., if there are a large number of input parameters or if there are
discrete or Manufacturable Values input parameters).

Note

If you have discrete or Manufacturable Values input parameters, an alert will be displayed
before a Min-Max Search is performed, reminding you that the search may be time-consuming.
If you do not want to be shown this message before each Min-Max Search including discrete
or Manufacturable Values parameters, you can disable it in the Design Exploration Messages
section of the Workbench Options dialog (accessed via Tools > Options > Design Exploration).

Before updating your Response Surface, set the options for the Min-Max Search. Click on the Min-Max
Search node and set the following options in the Properties view:

Enter the Number of Initial Samples to generate for the optimization.

Enter the Number of Start Points to use for the Min-Max search algorithm. The more start points
entered, the longer the search time.

Once your Response Surface has been updated, select the Min-Max Search cell in the Outline to display
the sample points that contain the calculated minimum and maximum values for each output parameter
(they will be shown in the Response Surface Table). The maximum and minimum values in the output
parameter Properties view will also be updated based on the results of the search. If the response
surface is updated in any way, including changing the fitting for an output parameter or performing a
refinement, a new Min-Max Search is performed.

The Min-Max Search uses the NLPQL algorithm to search the parameter space for the maximum and
minimum values. Assuming that all input parameters are continuous, one search is done for the minimum
value and one for the maximum, and if Number of Starting Points is more than one, two searches are
done (minimum and maximum) for each starting point. To find the minimum or maximum of an output,
the generated sample set is sorted and the "n" first points of the sort are used as the starting points.
The NLPQL algorithm is run twice for each starting point, once for the minimum and once for the
maximum.

If there are discrete input parameters or continuous input parameters with Manufacturable Values, the
number of searches grows with the number of combinations of their values: two searches are performed
per combination, one for the minimum and one for the maximum. For example, if there are two discrete
parameters, one with four levels and one with three, the number of searches is 4 * 3 * 2 = 24. This
example assumes only one starting point; if you add multiple starting points, the number of searches
is multiplied accordingly.
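The search count described above can be summarized in a few lines (a sketch of the counting rule only, not of the NLPQL algorithm itself):

```python
def minmax_search_count(discrete_levels, n_start_points=1):
    """Number of NLPQL runs for a Min-Max Search: two runs (one for the
    minimum, one for the maximum) per combination of discrete /
    Manufacturable Values levels, times the number of start points."""
    combos = 1
    for levels in discrete_levels:
        combos *= levels
    return combos * 2 * n_start_points

# Example from the text: two discrete parameters with 4 and 3 levels,
# one starting point -> 4 * 3 * 2 = 24 searches
print(minmax_search_count([4, 3]))                    # 24
print(minmax_search_count([4, 3], n_start_points=2))  # 48
```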

When you select the Min-Max Search cell in the outline, the sample points that contain the minimum
and maximum calculated values for each output parameter are shown in the Response Surface table.
These sample points can be saved to the project by selecting Insert as Design Point(s) from the right-
click menu. The sample points can also be saved as response points (or as refinement points to improve
the Response Surface) by selecting Explore Response Surface at Point from the right-click context
menu.

The sample points obtained from a Min-Max search are used by the Screening optimization method.
If you run a Screening optimization, the samples are automatically taken into account in the sample
set used to run or initialize the optimization. For details, see Performing a Screening Optimization (p. 130).

You can disable the Min-Max Search cell by deselecting the box in the Enabled column in the Outline.
If you disable the Min-Max Search, no search is done when the Response Surface is updated. If you
disable the search after performing an initial search, the results from the initial search will remain in
the output parameter properties and will be shown in the Response Surface Table view when you click
on the Min-Max Search cell.

Note

If no solutions have occurred yet, no Minimum or Maximum values are shown.

If the Design of Experiments object is solved, but the Response Surface is not solved, the
minimum and maximum values for each output parameter are extracted from the DOE solutions
design points and displayed in the Properties for the output parameters.

For discrete parameters and continuous parameters with Manufacturable Values, there is only
one Minimum and Maximum value per output parameter, even if a discrete parameter has many
levels (there is not one Min-Max value set per combination of discrete/continuous with
Manufacturable Values parameter values).

Response Surface Charts


The Response Surface provides four types of charts for exploring the design space by graphically
viewing the impact that parameters have on one another: the Spider chart, Response chart,
Local Sensitivity chart, and Local Sensitivity Curves chart.

When a Response Surface is updated, one response point and one of each of the chart types are created
automatically. You can insert as many response points and charts as you want, either from the Toolbox
view or from the right-click context menu:


When you click on a response point in the Outline view, the Toolbox displays all the chart types that are
available for the cell. To insert another chart, drag a chart template out of the Toolbox and drop it on
the response point.

When you right-click on a response point cell in the Outline view, you can select the Insert option for
the type of chart you want to add.

To duplicate a chart that already exists in the Outline view, you can either:

Right-click on the chart and select Duplicate from the context menu. This operation creates a duplicate
chart under the same response point.

Drag the chart and drop it on a response point. This operation allows you to create a duplicate chart
under the response point of your choice.

Note

Chart duplication triggers a chart update; if the update succeeds, both the original chart and
the duplicate will be up-to-date.

Once you've created a chart, you can change the name of a chart cell by double-clicking it and entering
the new name. (This will not affect the title of the chart, which is set as part of the chart properties.)
You can also save a chart as a graphic by right-clicking it and selecting Save Image As. See Saving a
Chart for more information.

Each chart provides the ability to visually explore the parameter space by using the input parameter
sliders (or drop-down menus for discrete parameters and continuous parameters with Manufacturable
Values) available in the chart's Properties view. These sliders allow you to modify an input parameter
value and view its effect on the displayed output parameter.

All of the Response charts under a Response Point on the Chart view use the same input parameter
values because they are all based on the current parameter values for that response point. Thus, when
you modify the input parameter values, the response point and all of its charts are refreshed to take
the new values into account.

Related Topics:

Using the Response Chart (p. 101)

Using the Spider Chart (p. 112)

Using Local Sensitivity Charts (p. 112)

Using the Response Chart


The Response chart is a graphical representation that allows you to see how changes to each input
parameter affect a selected output parameter. In this chart, you select an output parameter to be dis-
played on one axis, and (depending on the chart mode used) select one or more input parameters to
display on the other axes.

Once your input(s) and output are selected, you can use the sliders or enter values to change the values
of the input parameters that are not selected in order to explore how these parameters affect the shape
and position of the curve.

Additionally, you can opt to display all the design points currently in use (from the DOE and the Response
Surface refinement) by selecting the Show Design Points check box in the Properties view. With a
small number of input parameters, this option can help you to evaluate how closely the Response Surface
fits the design points in your project.

Response Chart Modes


The Response chart has three different viewing modes: 2D Contour Graph, 3D Contour Graph, and
2D Slices.

The 2D mode is the 2D Contour Graph, a two-dimensional graphic that allows you to view how changes
to a single input impact a single output. For details, see Using the 2D Contour Graph Response Chart (p. 104).

The 3D mode is the 3D Contour Graph, a three-dimensional graphic that allows you to view how changes
to two inputs impact a single output. For details, see Using the 3D Contour Graph Response Chart (p. 105).

The 2D Slices mode combines the benefits of the 2D and 3D Contour Graph modes by compressing all
the data contained in a three-dimensional surface into an easy-to-read two-dimensional format. It displays
an input on the X axis, an input on the Slice axis, and a selected output on the Y axis. The input on the
Slice axis is calculated from the number of slices defined, the number of Manufacturable Values, or the
number of discrete parameter levels. For details, see Using the 2D Slices Response Chart (p. 107).

When you solve a Response Surface, a Response chart is automatically added for the default Response
Point in the Response Surface Outline view. To add another Response chart (either an additional chart
for the default response point or a chart for a different response point), right-click the desired Response
Point in the Outline view and select Insert Response.


Understanding the Response Chart Display


The graphical rendering of the Response chart varies according to the type (i.e., the Classification
property and the usage of Manufacturable Values) of the parameters selected. When two input para-
meters are selected for a Response chart, the display is further determined by the specific combination
of parameter types. This section illustrates how parameters of different types and various combinations
are rendered in the three Response chart modes.

Display by Parameter Type


Input parameters can be continuous, discrete, or continuous with Manufacturable Values; the Response
chart displays each type differently, with a graphical rendering that reflects the nature of the parameter
values. For example:

Continuous parameters are represented by colored curves that reflect the continuous nature of the para-
meter values.

Discrete parameters are represented by bars that reflect the discrete nature of the parameter values. There
is one bar per discrete value.

Continuous parameters with Manufacturable Values are represented by a combination of curves and
markers, with transparent gray curves reflecting the continuous values and colored markers reflecting the
discrete nature of the Manufacturable Values. There is one marker per Manufacturable Value.

The examples below show how each type of parameter is displayed on the 2D Response chart.

Display by Parameter Combination


When two input parameters are selected for the Response chart, different parameter type combinations
are displayed differently, with a graphical rendering that reflects the nature of each parameter.

3D Response Chart Parameter Combinations

The 3D Response chart has two inputs and can have combinations of parameters with like types or
unlike types. Response chart examples for possible combinations are shown below.


2D Slices Response Chart Parameter Combinations

The 2D Slices Response chart has two inputs. Combinations of these inputs are categorized first by the
parameter type of the selected input (the X Axis) and then further distinguished by the parameter type
of the calculated input (the Slice Axis). For each X-axis parameter type, there are two different renderings:

The X axis in conjunction with continuous values (a continuous parameter). In this instance, you specify
the number of curves or slices.

The X-axis in conjunction with discrete values (either a discrete parameter or a Manufacturable Values
parameter). In this instance, the number of slices is automatically set to the number of discrete levels or
the number of Manufacturable Values.

Response chart examples for possible combinations are shown below.


Using the 2D Contour Graph Response Chart


A Response chart in the 2D Contour Graph mode is a two-dimensional graphic that allows you to view
how changes to a single input impact a single output. Essentially, it is a flattened representation of a
three-dimensional Response chart, displaying a selected input on the X axis and a selected output on
the Y axis.


Viewing the 2D Response Chart


To view a Response chart in the 2D Contour Graph mode:

1. In the Response Surface Outline view, select the Response chart cell.

2. In the Properties view, under Chart:

Set the Mode property to 2D.

Set chart resolution by entering a value for the Chart Resolution Along X property.

3. In the Properties view, under Axes:

For the X Axis property, select an input parameter.

For the Y Axis property, select an output parameter.

The Response chart will automatically update according to your selections.

For more information on available properties, see Response Chart: Properties (p. 111).

Using the 3D Contour Graph Response Chart


The 3D Contour Graph mode is a three-dimensional graphic that allows you to view how changes to
two inputs impact a single output. It displays a selected input on the X axis, a selected input on the Y
axis, and a selected output on the Z axis.


Viewing the 3D Response Chart


1. In the Response Surface Outline view, select the Response chart cell.

2. In the Properties view, under Chart:

Set the Mode property to 3D.

Set chart resolution by entering values for the Chart Resolution Along X and Chart Resolution Along
Y properties.

3. In the Properties view, under Axes:

For the X Axis property, select an input parameter.

For the Y Axis property, select an input parameter.

For the Z Axis property, select an output parameter.

The Response chart will automatically update according to your selections; a smooth three-dimensional
contour of Z versus X and Y displays.

For more information on available properties, see Response Chart: Properties (p. 111).

Manipulating the 3D Response Chart


In the Chart view, the 3D contour can be rotated by clicking and dragging the mouse. Moving the
cursor over the response surface shows the values of Z, X, and Y. The values of other input parameters
can also be adjusted in the Properties view using the sliders, showing different contours of Z. Addition-
ally, the corresponding values of other output parameters can be seen in the Properties view; thus,
instantaneous design and analysis is possible, leading to the generation of additional design points.

The triad control at the bottom left in the 3D Response chart view allows you to rotate the chart in
freehand mode or quickly view the chart from a particular plane. To zoom in or out on any part of the
chart, use the normal zoom controls (shift-middle mouse button, or scroll wheel). See Setting Chart
Properties for details.

Using the 2D Slices Response Chart


The 2D Slices mode combines the benefits of the 2D and 3D Contour Graph modes by representing a
three-dimensional surface in a two-dimensional format. It displays an input on the X axis, an input on
the Slice axis, and an output on the Y axis.

The value of the input on the Slice axis is calculated from the number of curves defined for the X and
Y axes.

When both of the inputs are continuous parameters, you specify the number of slices to be displayed.

When one or both of the inputs are either discrete or continuous with Manufacturable Values, the number
of slices is determined by the number of levels defined for the input parameter(s).

Essentially, the first input varies continuously along the X axis, while the second input is represented
by the number of curves or slices defined for the Slice axis. Both inputs are then displayed on the XY
plane with respect to the output parameter on the Y axis. You can think of the 2D Slices chart as a
projection of the 3D Response Surface curves onto a flat surface.

Viewing the 2D Slices Response Chart


To view a Response chart in the 2D Slices mode:

1. In the Response Surface Outline view, select the Response chart cell.

2. In the Properties view, under Chart:

Set the Mode property to 2D Slices.


Enter a value for the Chart Resolution Along X property.

If displayed, enter a value for the Number of Slices property.

3. In the Properties view, under Axes:

For the X Axis property, select an input parameter.

For the Slice Axis property, select an input parameter.

For the Y Axis property, select an output parameter.

The Response chart will automatically update according to your selections.

For more information on available properties, see Response Chart: Properties (p. 111).

2D Slices Rendering: Example


To illustrate how the 2D Slices chart is rendered, we'll start with a 3D Response chart. In our example,
both inputs are continuous parameters. Chart resolution properties default to 25, which means there
are 25 points on the X axis and 25 points on the Y axis.

Now, we'll rotate the 3D image so the X axis is along the bottom edge of the chart, mirroring the
perspective of the 2D Slices chart for a better comparison.


Finally, we'll switch the chart Mode to 2D Slices. When you compare the chart below to the earlier 3D
version, you can see how the 2D Slices chart is actually a two-dimensional rendering of a three-dimensional
image. From the example below, you can see the following things:

Along the Y axis, there are 10 slices, corresponding to the value of the Number of Slices property.

Along the X axis, each slice intersects with 25 points, corresponding to the value of the Chart Resolution
Along X property.


Response Chart: Example


This example illustrates using the three different modes of Response chart to view the impact of input
parameters on an output parameter.

We will work with the following example:

DOE Type: Sparse Grid Initialization

Input Parameter 1: WB_X (continuous, with range of -2; 2)

Input Parameter 2: WB_Y (continuous with Manufacturable Values, with Manufacturable Values levels
of 1, 1.5, and 1.8, and a range of 1; 1.8)

Output Parameter: WB_Rosenbrock

Design Points: There are 6 design points on the X axis and 6 design points on the Y axis (since there is
an input with Manufacturable Values, the number of slices is determined by the number of levels defined)

The images below show the initial Response chart in all three modes: 2D, 3D, and 2D Slices.

To display the design points from the DOE and the Response Surface refinement, select the Show
Design Points check box in the Properties view. The design points are superimposed on your chart.

When working with Manufacturable Values, you can improve chart quality by extending the parameter
range. Here, we'll increase it by adding Manufacturable Values of -1 and 3, which become the new
lower and upper bounds.


To improve the quality of your chart further, you can increase the number of points used in building it
by entering values in the Properties view. Below, we've increased the number of points on the X and
Y axes from 6 to 25.

Response Chart: Properties


To specify the properties of a Response chart, select Response in the Response Surface Outline view
and then edit the properties in the Properties view. Note that available properties vary according to
the chart mode and the type of parameters selected.

Chart Properties
Determines the properties of the chart.

Display Full Parameter Name: Determines if the chart displays the full parameter name or the parameter
ID.

Mode: Determines whether the chart is in 2D, 3D, or 2D Slices mode.

Chart Resolution Along X: Determines the number of points on the X-axis. The number of points controls
the amount of curvature that can be displayed. A minimum of 2 points is required and produces a straight
line. A maximum of 100 points is allowed for maximum curvature. Defaults to 25.

Chart Resolution Along Y: Determines the number of points on the Y-axis (3D and 2D Slices modes only).
The number of points controls the amount of curvature that can be displayed. A minimum of 2 points is
required and produces a straight line. A maximum of 100 points is allowed for maximum curvature. Defaults
to 25.

Number of Slices: Determines the number of slices displayed in the 2D Slices chart.

Show Design Points: Determines whether all of the design points currently in use (i.e., both in the Design
of Experiments and from the Response Surface refinement) are displayed on the chart.

Axes Properties
Determines what data is displayed on each chart axis. For each axis, under Value, you can change what
the chart displays on an axis by selecting an option from the drop-down.

For the X Axis, available options are each of the input parameters enabled in the project.

For the Y Axis, available options are as follows:

For 2D mode, each of the output parameters in the project.

For 3D mode, each of the input parameters enabled in the project.


For 2D Slices mode, each of the output parameters in the project.

For the Z Axis (3D mode only), available options are each of the output parameters in the project.

For the Slice Axis (2D Slices mode only), available options are each of the input parameters enabled in
the project. Only available when both input parameters are continuous.

Input Parameters
Each of the input parameters is listed in this section. Under Value, you can change the value of each
input parameter by moving the slider (for continuous parameters) or by entering a new value with the
keyboard (for discrete or Manufacturable Values parameters). The number to the right of the slider
represents the current value.

Output Parameters
Each of the output parameters is listed in this section. Under Value, you can view the interpolated value
for each parameter.

Generic Chart Properties


You can modify various generic chart properties for this chart. See Setting Chart Properties for details.

Using the Spider Chart


Spider charts allow you to visualize the impact that changing the input parameter(s) has on all of the
output parameters simultaneously. When you solve a Response Surface, a Spider chart will appear in
the Outline view for the default response point.

You can use the slider bars in the chart's Properties view to adjust the value for the input parameter(s)
to visualize different designs. You can also enter specific values in the value boxes. The parameter legend
box at the top left in the Chart view allows you to select the parameter that is in the primary (top)
position. Only the axis of the primary parameter will be labeled with values.

Using Local Sensitivity Charts


Local Sensitivity charts allow you to see the impact of continuous input parameters (both with and
without Manufacturable Values) on output parameters. At the Response Surface level, sensitivity charts
are Single Parameter Sensitivities. This means that design exploration calculates the change of the
output(s) based on the change of inputs independently, at the current value of each input parameter.


The larger the change of the output parameter(s), the more significant is the role of the input parameters
that were varied. As such, single parameter sensitivities are local sensitivities.

Types of Sensitivity Charts


DesignXplorer has two types of sensitivity charts: the standard Local Sensitivity chart and the Local
Sensitivity Curves chart.

The Local Sensitivity chart is a powerful project-level tool, allowing you to see at a glance the impact of
all the input parameters on output parameters. For details, see Using the Local Sensitivity Chart (p. 115).

The Local Sensitivity Curves chart helps you to further focus your analysis by allowing you to view inde-
pendent parameter variations within the standard Local Sensitivity chart. It provides a means of viewing
the impact of each input on specific outputs, given the current values of other parameters. For details,
see Using the Local Sensitivity Curves Chart (p. 118).

When you solve a Response Surface, a Local Sensitivity chart and a Local Sensitivity Curves chart are
automatically added for the default response point in the Response Surface Outline view. To add an-
other chart (this can be either an additional chart for the default response point or a chart for a different
response point), right-click the desired response point in the Outline view and select either Insert
Local Sensitivity or Insert Local Sensitivity Curves.

Manufacturable Values in Local Sensitivity Charts


Local sensitivity charts calculate sensitivities for continuous parameters and continuous parameters with
Manufacturable Values, but require at least one parameter that is not a discrete parameter.
If all of your input parameters are discrete, the Local Sensitivity and Local Sensitivity Curves charts are
not available.

For details on how continuous parameters with Manufacturable Values are represented on local sensit-
ivity charts, see Understanding the Local Sensitivities Display (p. 113).

Discrete Parameters in Local Sensitivity Charts


Discrete parameters cannot be enabled as chart variables, but their impact on the output can still be
displayed on the chart. When there is a discrete parameter in the project, the chart calculates the con-
tinuous values and Manufacturable Values sensitivities given their specific combination with the discrete
parameter values.

For discrete parameters, the Properties view includes a drop-down that is populated with the discrete
values defined for the parameter. By selecting different discrete values for each parameter, you can
explore the different sensitivities given different combinations of discrete values. The chart is updated
according to the changed parameter values. You can either check the sensitivities in a single chart, or
create multiple charts to compare the different designs.

Understanding the Local Sensitivities Display


Local sensitivity charts display both continuous parameters and continuous parameters with Manufac-
turable Values. This section illustrates how each type of sensitivity chart renders continuous values and
Manufacturable Values.


Local Sensitivity Curves Chart Display


On the Local Sensitivity Curves chart, continuous parameters are represented by colored curves, with
a different color for each parameter.

For continuous parameters with Manufacturable Values, the continuous values are represented by a
transparent gray curve, while the Manufacturable Values are represented by colored markers.

In the image below:

Input parameters P2-D and P4-P (the yellow and blue curves) are continuous parameters.

Input parameters P1-B and P3-L (the gray curves) are continuous with Manufacturable Values.

The colored markers indicate the Manufacturable Values defined.

The black markers indicate the location of the response point on each of the input curves.

Local Sensitivity Chart Display


On the Local Sensitivity chart, continuous parameters are represented by colored bars, with a different
color for each parameter.

For continuous parameters with Manufacturable Values, the continuous values are represented by a
gray bar, while the Manufacturable Values are represented by a colored bar in front of it. Each bar is
defined with the min/max extracted from the Manufacturable Values and the average calculated from
the support curve. The min/max of the output can vary according to whether or not Manufacturable
Values are used; in this case, both the colored bar and the gray bar for the input are visible on the chart.

Also, if the parameter range extends beyond the actual Manufacturable Values defined, the bar is topped
with a gray line to indicate the sensitivity obtained while ignoring the Manufacturable Values.

In the image below:

Input parameters P2-D and P4-P (the yellow and blue bars) are continuous parameters.

Input parameters P1-B and P3-L (the red and green bars) are continuous with Manufacturable Values.


The bars for inputs P1-B and P3-L show differences between the min-max of the output when Manufac-
turable Values are used (the colored bar in front) versus when they are not (the gray bar in back, now
visible).

Using the Local Sensitivity Chart


The Local Sensitivity chart can be a powerful exploration tool. For each output, it allows you to see the
weight of the different inputs; it calculates the change of the output based on the change of each input
independently, at the current value of each input parameter in the project. The Local Sensitivity chart
can be displayed as a bar chart or a pie chart.


By default, the Local Sensitivity chart shows the impact of all input parameters on all output parameters,
but it is also possible to specify the inputs and outputs to be considered. If you consider only one output,
the resulting chart provides an independent sensitivity analysis for each single input parameter. To
specify inputs and outputs, select or deselect the Enabled check box.

For more information on available properties, see Local Sensitivity Chart: Properties (p. 117).

Local Sensitivity Chart: Example


Below is an example of the Local Sensitivity chart displayed as a bar chart, plotted to show the impact
of inputs P1-WB_Thickness and P2-WB_Radius on the output P4-WB_Deformation. An explanation
of how the bar chart is built follows.

To determine the sensitivity of P1-WB_Thickness vs. P4-WB_Deformation, we:

1. Plot the response curve P4 = f(P1).

2. Compute (Max(P4) - Min(P4))/Avg(P4) ~= 2.3162.

3. If P4 increases while P1 increases, the sign is positive; otherwise it is negative.

In our example we get 2.3162, as shown below. This corresponds to the red bar for P1-WB_Thickness
in the Local Sensitivity chart.
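The three steps above can be sketched in a few lines. This is an illustrative reading of the calculation, not DesignXplorer code; the `response`, `bounds`, and `current` names are assumptions introduced for the example.

```python
# Illustrative sketch (not the DesignXplorer API): single-parameter local
# sensitivity as described in steps 1-3 above. `response` stands in for a
# response surface evaluation such as P4 = f(P1, P2).

def local_sensitivity(response, bounds, current, varied, n=25):
    """Signed (Max - Min)/Avg of the output while one input sweeps its
    range and all other inputs stay at their current values."""
    lo, hi = bounds[varied]
    outputs = []
    for i in range(n):
        point = dict(current)                      # other inputs held constant
        point[varied] = lo + (hi - lo) * i / (n - 1)
        outputs.append(response(point))
    sensitivity = (max(outputs) - min(outputs)) / (sum(outputs) / len(outputs))
    # Sign convention from step 3: positive if the output rises as the input rises.
    return sensitivity if outputs[-1] >= outputs[0] else -sensitivity
```

Changing the held-constant values in `current` changes the result, which is why the chart recomputes all sensitivities whenever you move an input slider.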


Now we plot P4 = f(P2) and compute (Max(P4) - Min(P4))/Avg(P4) ~= 1.3316, as shown below. This
corresponds to the blue bar for P2-WB_Radius in the Local Sensitivity chart.

On each of these curves, only one input is varying and the other input parameters are constant, but
you can change the value of any input and get an updated curve. This also applies to the standard
Local Sensitivity chart; all the sensitivity values are recalculated when you change the value of an input
parameter. If parameters are correlated, then you'll see the sensitivity varying; in other words, the relative
weights of inputs may vary depending on the design.

Local Sensitivity Chart: Properties


To specify the properties of a Local Sensitivity chart, first select the Local Sensitivity chart in the Response
Surface Outline view, and then edit the properties in the Properties view.

Chart Properties

Display Full Parameter Name: Determines if the chart displays the full parameter name or the parameter
ID.

Mode: Determines whether the chart is in Pie or Bar format.


Input Parameters

Each of the input parameters is listed in this section. For each input parameter:

Under Value, you can change the value by moving the slider or entering a new value with the keyboard.
The number to the right of the slider represents the current value.

Under Enabled, you can enable or disable the parameter by selecting or deselecting the check box. Disabled
parameters do not display on the chart.

Output Parameters

Each of the output parameters is listed in this section. For each output parameter:

Under Value, you can view the interpolated value for the parameter.

Under Enabled, you can enable or disable the parameter by selecting or deselecting the check box. Disabled
parameters do not display on the chart.

Generic Chart Properties

You can modify various generic chart properties for this chart. See Setting Chart Properties for details.

Using the Local Sensitivity Curves Chart


The Local Sensitivity Curves chart helps you to focus your analysis by allowing you to view independent
parameter variations within the standard Local Sensitivity chart. This multi-curve chart provides a means
of viewing the impact of each input parameter on specific outputs, given the current values of the
other parameters. The Local Sensitivity Curves chart shows individual local sensitivities, with a separate
curve to represent the impact of each input on one or two outputs.

There are two types of Local Sensitivity Curves chart: Single Output and Dual Output.

The Single Output version calculates the impact of each input parameter on a single output parameter
of your choice.

The Dual Output version calculates the impact of each input parameter on two output parameters of
your choice.

Local Sensitivity Curves Chart: Single Output


By default, the chart opens to the Single Output version, as shown below. In this version:

The X-axis is all selected inputs (normalized).

The Y-axis is a single output parameter.

Each curve represents the impact of an enabled input parameter on the selected output.

For each curve, the current response point is indicated by a black point marker (all of the response points
have equal Y-axis values).

Continuous parameters with Manufacturable Values are represented by gray curves; the colored markers
are the Manufacturable Values defined.


Where impacts are the same for one or more inputs, the curves are superimposed, with the curve of the
input displayed first in the list hiding the curves for the other inputs.

To change the output being considered, go to the Properties view and select a different output para-
meter for the Y axis property.

Local Sensitivity Curves Chart: Dual Output


To view the Dual Output version, go to the Properties view and select an output parameter for both
the X axis and the Y axis properties. In this version:

The X-axis is a selected output parameter.

The Y-axis displays a selected output parameter.

Each curve represents the impact of an enabled input parameter on the two selected outputs.

The circle at the end of a curve represents the beginning of the curve (i.e., the lower bound of the input
parameter).

For each curve, the current response point is indicated by a black point marker.

Continuous parameters with Manufacturable Values are represented by gray curves; the colored markers
are the Manufacturable Values defined.

Where impacts are the same for one or more inputs, the curves are superimposed, with the curve of the
input displayed first in the list hiding the curves for the other inputs.

To change the output(s) being considered, go to the Properties view and select a different output
parameter for one or both of the Axes properties.
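Conceptually, each Dual Output curve is a parametric trace: one input sweeps its range while the chart plots the two selected outputs against each other. The sketch below illustrates that idea; it is not DesignXplorer code, and the function and parameter names are assumptions.

```python
# Illustrative sketch (not the DesignXplorer API): a Dual Output sensitivity
# curve traces (X-output, Y-output) pairs while a single input sweeps its
# range and the other inputs hold their current values.

def dual_output_curve(f_x, f_y, bounds, current, varied, n=25):
    lo, hi = bounds[varied]
    curve = []
    for i in range(n):
        point = dict(current)                      # other inputs held constant
        point[varied] = lo + (hi - lo) * i / (n - 1)
        curve.append((f_x(point), f_y(point)))     # one (X, Y) sample on the curve
    return curve  # curve[0] is the lower-bound end (marked by the circle)
```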


For more information on available properties, see Local Sensitivity Curves Chart: Properties (p. 123).

Local Sensitivity Curves Chart: Example


This section explains how to interpret the Local Sensitivity Curves chart and illustrates how it is related
to the Local Sensitivity chart for the same response point.

Below is an example of a Local Sensitivity bar chart for a given response point. This chart shows the
impact of four input parameters (P1-B, P2-D, P4-P, and P5-E) on two output parameters (P6-V and P8-
DIS).

On the left side of the chart, you can see how input parameters impact the output parameter P6-V:

P2-D (the yellow bar) has the most impact and the impact is positive.

P1-B (the red bar) has a moderate impact, and the impact is positive.

Input parameters P4-P and P5-E (the teal and blue bars) have no impact at all.

On the right side of the chart, you can see the difference in how the same input parameters impact the
output parameter P8-DIS:

P2-D (the yellow bar) has the most impact, but the impact is negative.

P1-B (the red bar) has a moderate impact, but the impact is negative.

P4-P (the teal bar) now has a moderate impact, and the impact is positive.

P5-E (the blue bar) now has a moderate impact, and the impact is negative.


Single Output

When you view the Local Sensitivity Curves chart for the same response point, the chart defaults to the
Single Output version (i.e., it shows the impact of all enabled inputs on a single selected output). The
output parameter is on the Y-axis, with impact measured horizontally.

In the two examples below, you can see that the Single Output curve charts for output parameters P6-
V and P8-DIS show the same sensitivities as the Local Sensitivity bar chart.

Note

For output P6-V, inputs P4-P and P5-E have the same level of impact, so the blue line is
hidden behind the teal line in the chart below.

Note

For output P8-DIS, inputs P1-B and P5-E have the same level of impact, so the blue line is
hidden behind the red line in the chart below.


Dual Output

For more detailed information, you can view the Dual Output version of the Local Sensitivity Curves
chart (i.e., it shows the impact of all enabled inputs on two selected outputs). In this particular example,
there are only two output parameters. If there were six outputs in the project, however, you could
narrow the focus of your analysis by selecting the two that are of most interest to you.

The curves chart below shows the impact of the same input parameters on both of the output parameters
we've been using. Output P6-V is on the X-axis, with impact measured vertically. Output P8-DIS is on the
Y-axis, with impact measured horizontally. From this dual representation, you can see the following
things:

P2-D has the most significant impact on both outputs. The impact is positive for output P6-V, and negative
for output P8-DIS.

P1-B has a moderate impact on both outputs. The impact is positive for output P6-V, and negative for
output P8-DIS.

P4-P has no impact on output P6-V and a moderate positive impact on output P8-DIS.

P5-E has no impact on output P6-V and a moderate negative impact on output P8-DIS. Due to duplicate
impacts, its curve is hidden for both outputs.


Local Sensitivity Curves Chart: Properties


To specify the properties of a Local Sensitivity Curves chart, first select Local Sensitivity Curves in the
Response Surface Outline view, and then edit the properties in the Properties view.

Chart Properties

Determines the properties of the chart.

Display Full Parameter Name: Determines if the chart displays the full parameter names or the parameter
number.

Axis Range: Determines the range of the output parameter axes (i.e., the lower and upper bounds of the
axes).

If the Use Min Max of the Output Parameter option is selected, the axes bounds correspond to the
range given by the Min-Max Search results. If the Min-Max Search object was disabled, the output
parameter bounds are determined from existing design points.

If the Use Curves Data option is selected, the axes bounds match the ranges of the curves.

Chart Resolution: Determines the number of points per curve. The number of points controls the amount
of curvature that can be displayed. A minimum of 2 points is required and produces a straight line. A
maximum of 100 points is allowed for maximum curvature. Defaults to 25.

Axes Properties

Determines what data is displayed on each chart axis. For each axis, under Value, you can change what
the chart displays on an axis by selecting an option from the drop-down.

For the X-Axis, available options are Input Parameters and each of the output parameters defined in the
project. (If Input Parameters is selected, you are viewing a Single Output chart; otherwise, the chart is
Dual Output.)

For the Y-Axis, available options are each of the output parameters defined in the project.

Input Parameters

Each of the input parameters is listed in this section. For each input parameter:

Under Value, you can change the value by moving the slider or entering a new value with the keyboard.
The number to the right of the slider represents the current value.

Under Enabled, you can enable or disable the parameter by selecting or deselecting the check box. Disabled
parameters do not display on the chart.

Note

Discrete input parameters display with their level values associated with the response point, but
cannot be enabled as chart variables.


Output Parameters

Each of the output parameters is listed in this section. Under Value, you can view the interpolated value
for each parameter.

Generic Chart Properties

You can modify various generic chart properties for this chart. See Setting Chart Properties for details.

Using Goal Driven Optimization
This section contains information about running a Goal Driven Optimization analysis. DesignXplorer
offers two different types of Goal Driven Optimization systems: Response Surface Optimization and
Direct Optimization.

A Response Surface Optimization system draws its information from its own Response Surface
component, and so is dependent on the quality of the response surface. The available optimization
methods (Screening, MOGA, NLPQL, and MISQP) utilize response surface evaluations, rather than real
solves.

A Direct Optimization system is a single-component system which utilizes real solves. When used in
a Direct Optimization context, the available optimization methods (Screening, MOGA, NLPQL, MISQP,
Adaptive Single-Objective, and Adaptive Multiple-Objective) utilize real solves, rather than response
surface evaluations.

A Direct Optimization system does not have an associated Response Surface component, but can draw
its information from any other system or component that contains design point data. It is possible to
reuse existing design point data, reducing the time needed for the optimization, without altering the
source of the design points. For example:

You can transfer design point data from an existing Response Surface Optimization and improve
upon the optimization without actually changing the original response surface.

You can use information from a Response Surface that has been refined with the Kriging method
and validated. You can transfer the design point data to a Direct Optimization system and then
adjust the quality of the original response surface without affecting the attached direct optimization.

You can transfer information from any DesignXplorer system or component containing design point
data that has already been updated, saving time and resources by reusing existing, up-to-date data
rather than reprocessing it.

Note

The transfer of design point data between two Direct Optimization systems is not
supported.

A Direct Optimization system also allows you to monitor its progress by watching the Table view.
During a Direct Optimization, the Table view displays all the design points being calculated by the
optimization. The view is refreshed dynamically, allowing you to see how the optimization proceeds,
how it converges, etc. Once the optimization is complete, the raw design point data is stored for future


reference. You can access the data by clicking the Raw Optimization Data node in the Outline view
to display in the Table view a listing of the design points that were calculated during the optimization.

Note

This list contains raw design point data only; no analysis is applied, and it does
not show feasibility, ratings, Pareto fronts, etc. for the included points.

For more information, see Transferring Design Point Data for Direct Optimization (p. 127).

Related Topics:
Creating a Goal Driven Optimization System
Transferring Design Point Data for Direct Optimization
Goal Driven Optimization Methods
Defining the Optimization Domain
Defining Optimization Objectives and Constraints
Working with Candidate Points
Goal Driven Optimization Charts and Results

Creating a Goal Driven Optimization System


To create a Goal Driven Optimization system on your Project Schematic:

1. With the Project Schematic displayed, drag either a Response Surface Optimization or a Direct Op-
timization template from the Design Exploration area of the toolbox and drop it on the Project
Schematic. You may drag it:

directly under the Parameter Set bar, in which case it will not share any data with any other systems
in the Project Schematic.

onto the Design of Experiments component of a system containing a Response Surface, in which
case it will share all of the data generated by the DOE component.

onto the Response Surface component of a system containing a Response Surface, in which case it
will share all of the data generated for the DOE and Response Surface components.

For more detailed information on data transfer, see Transferring Design Point Data for Direct Optim-
ization (p. 127).

2. For a Response Surface Optimization system, if you are not sharing the DOE and Response Surface
cells, edit the DOE, set it up as described in Design of Experiments Component Reference (p. 29), and
solve both the DOE and Response Surface cells.

3. For a Direct Optimization system, if you have not already shared data via the options in Step 1, you
have the option of creating data transfer links to provide the system with design point data.

Note

If no design point data is shared, design point data will be generated automatically by
the update.


4. On the Project Schematic, double-click on the Optimization cell of the new system to open the Optim-
ization tab.

5. Specify the optimization method.

In the Optimization tab Outline view, select the Optimization node.

In the Properties view, select an optimization method and specify its optimization properties. For
details on the available optimization algorithms, see Goal Driven Optimization Methods (p. 128).

6. Specify optimization objectives or constraints.

In the Optimization tab Outline view, select either the Objectives and Constraints node or an item
underneath it.

In the Table or Properties view, define the Optimization objectives and constraints. For details, see
Defining Optimization Objectives and Constraints (p. 153).

7. Specify the optimization domain.

In the Optimization tab Outline view, select the Domain node or an input parameter or parameter
relationship underneath it.

In the Table or Properties view, define the selected domain object. For details, see Defining the Op-
timization Domain (p. 148).

8. Click the Update toolbar button.

Transferring Design Point Data for Direct Optimization


In DesignXplorer, you can transfer design point data to a Direct Optimization system by creating one
or more data transfer links from a component containing design point data to the systems Optimization
component.

Note

The transfer of design point data between two Direct Optimization systems is not supported.

Data Transferred
The data transferred consists of all design points that have been obtained by a real solution; design
points obtained from the evaluation of a response surface are not transferred. Design point data is
transferred according to the nature of its source component.

Design of Experiments component: All points, including those with custom output values.

Response Surface component: All refinement and verification points used to create or evaluate from
the Response Surface, since these are obtained by a real solution.

Note

The points from the DOE are not transferred; to transfer these points, create a data
transfer link from the DOE component.


Parameters Correlation component (standalone): All points.

Parameters Correlation component (linked to a Response Surface component): No points, because all
of them are obtained from a response surface evaluation, rather than a real solve.

Data Usage
Once transferred, the design point data is stored in an initial sample set, as follows:

If the Direct Optimization method is NLPQL or MISQP, the samples are not used.

If the Direct Optimization method is Adaptive Single-Objective, the initial sample set is filled with
transferred points, and additional points are generated via the Latin Hypercube Sampling (LHS)
workflow to reach the requested number of samples.

If the Direct Optimization method is MOGA or Adaptive Multiple-Objective, the initial sample set is
filled with transferred points, and additional points are generated to reach the requested number of
samples.

The transferred points are added to the samples generated to initiate the optimization. For example, if
you have requested 100 samples and 15 points are transferred, a total 115 samples will be available to
initiate the optimization.

When there are duplicates between the transferred points and the initial sample set, the duplicates are
removed. For example, if you have requested 100 samples, 15 points are transferred, and 6 duplicates
are found, a total of 109 samples will be available to initiate the optimization.

For more information on data transfer links, see Project Schematic Links in the Workbench User's Guide.

Goal Driven Optimization Methods


DesignXplorer offers the following Goal Driven Optimization methods:

Screening: Shifted-Hammersley Sampling. Available for Response Surface Optimization and Direct
Optimization.

NLPQL: Nonlinear Programming by Quadratic Lagrangian. Available for Response Surface Optimization
and Direct Optimization.

MISQP: Mixed-Integer Sequential Quadratic Programming. Available for Response Surface Optimization
and Direct Optimization.

MOGA: Multi-Objective Genetic Algorithm. Available for Response Surface Optimization and Direct
Optimization.

Adaptive Single-Objective: Hybrid optimization method using Optimal Space-Filling Design, a Kriging
response surface, MISQP, and domain reduction. Available for Direct Optimization only.

Adaptive Multiple-Objective: Hybrid optimization method using a Kriging response surface and MOGA.
Available for Direct Optimization only.

External Optimizer: An external optimizer as defined in a loaded optimization extension. Availability is
determined by the optimizer, as defined in the optimization extension.

The general capabilities of each method are as follows:

Screening: Multiple Objectives, Global Search, Discrete parameters, Manufacturable Values, Parameter
Relationships.

NLPQL: Single Objective, Local Search, Manufacturable Values.

MISQP: Single Objective, Local Search, Discrete parameters, Manufacturable Values, Parameter
Relationships.

MOGA: Multiple Objectives, Global Search, Discrete parameters, Manufacturable Values, Parameter
Relationships.

Adaptive Single-Objective: Single Objective, Global Search, Manufacturable Values.

Adaptive Multiple-Objective: Multiple Objectives, Global Search, Manufacturable Values, Parameter
Relationships.

External Optimizer: Capabilities are determined by the optimizer, as defined in the optimization
extension.

The optimization method for a design study is selected via the drop down menu for the Method Name
property (in the Properties view of the Optimization tab). DesignXplorer filters the Method Name list
for applicability to the current project; that is, it displays only those optimization methods that can be
used to solve the optimization problem as it is currently defined. For example, if your project has multiple
objectives defined and an external optimizer does not support multiple objectives, the optimizer will
not be included in the option list. When no objectives or constraints are defined for a project, all optim-
ization methods are displayed in the Method Name drop down. If you already know that you want to
use a particular external optimizer, it is recommended that you select it as the method before setting
up the rest of the project. Otherwise, the optimization method could be inadvertently filtered from the
list.
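The filtering described above can be pictured as matching the problem's requirements against each method's declared capabilities. The sketch below is purely illustrative, not DesignXplorer internals; the capability names and the method entries are hypothetical.

```python
# Illustrative sketch (not DesignXplorer internals): filtering the Method
# Name list by the capabilities the current optimization problem requires.
# Capability names and method entries are hypothetical.

METHODS = {
    "Screening": {"multiple_objectives", "global_search", "discrete"},
    "NLPQL": {"single_objective", "local_search"},
    "MISQP": {"single_objective", "local_search", "discrete"},
}

def applicable_methods(required):
    """Return the methods whose declared capabilities cover the requirements."""
    return [name for name, caps in METHODS.items() if required <= caps]
```

With no objectives or constraints defined, `required` is empty and every method passes the filter, mirroring the behavior described above.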

The following sections give instructions for selecting and specifying the optimization properties for
each algorithm.


Performing a Screening Optimization
Performing a MOGA Optimization
Performing an NLPQL Optimization
Performing an MISQP Optimization
Performing an Adaptive Single-Objective Optimization
Performing an Adaptive Multiple-Objective Optimization
Performing an Optimization with an External Optimizer

Performing a Screening Optimization


The Screening option can be used for both Response Surface Optimization and Direct Optimization. It
allows you to generate a new sample set and sort its samples based on objectives and constraints. It
is a non-iterative approach that is available for all types of input parameters. Usually the Screening
approach is used for preliminary design, which may lead you to apply the MOGA or NLPQL options for
more refined optimization results. See the Principles (GDO) (p. 225) theory section for more information.

To perform a Screening optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

Method Name: Set to Screening.

Number of Samples: Enter the number of samples to generate for the optimization. Note that samples
are generated very rapidly from the response surface and do not require an actual solve of any design
points.

Must be greater than or equal to the number of enabled input and output parameters; the number
of enabled parameters is also the minimum number of samples required to generate the Sensit-
ivities chart. You can enter a minimum of 2 and a maximum of 10,000. Defaults to 1000 for a
Response Surface Optimization and 100 for a Direct Optimization.

Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be


generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table
View (p. 158).

Verify Candidate Points: Select to verify candidate points automatically at the end of an update for
a Response Surface Optimization; this property is not applicable to Direct Optimization.

3. Specify optimization objectives and constraints.

In the Optimization tab Outline view, select the Objectives and Constraints node or an item under-
neath it.

In the Table or Properties view, define the Optimization objectives and constraints. For details, see
Defining Optimization Objectives and Constraints (p. 153).

4. Specify the optimization domain.

In the Optimization tab Outline view, select the Domain node or an input parameter or parameter
relationship underneath it.

In the Table or Properties view, define the selected domain object. For details, see Defining the Op-
timization Domain (p. 148).

Goal Driven Optimization Methods

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select
the Update option.

The result is a group of points or sample set. The points that are most in alignment with the ob-
jectives and constraints are displayed in the table as the Candidate Points for the optimization.
The result also displays the read-only field Size of Generated Sample Set in the Properties view.

For a Screening Response Surface optimization, the sample points obtained from a response surface
Min-Max search are automatically added to the sample set used to initialize or run the optimization.
For example, if the Min-Max search results in 4 points and you run a Screening optimization
with Number of Samples set to 100, the final optimization sample set will contain up to 104 points.
If a point found by the Min-Max search is also contained in the initial Screening sample set, it is only
counted once in the final sample set.
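This counting rule can be sketched as a simple set union (illustrative only; points are shown here as tuples of input values, not actual DesignXplorer samples):

```python
# Illustrative only: merge Min-Max search points into a Screening sample
# set, counting a point that appears in both sets just once.
screening_samples = {(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)}
min_max_points = {(1.0, 0.0), (0.25, 0.75)}  # (1.0, 0.0) is a duplicate

final_sample_set = screening_samples | min_max_points
print(len(final_sample_set))  # 4, not 5: the shared point is counted once
```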

Note

When constraints exist, the Tradeoff chart will indicate which samples are feasible (meet
the constraints) or are infeasible (do not meet the constraints). There is a display option
for the infeasible points.

6. Select the Optimization node in the Outline view and review optimization details in the Optimization
Status section of the Properties view. The following output properties are displayed:

Number of Evaluations: Number of design point evaluations performed. This value takes all points
used in the optimization, including design points pulled from the cache. Can be used to measure the
efficiency of the optimization method to find the optimum design point.

Number of Failures: Number of failed design points for the optimization. When design points fail, a
Screening optimization does not attempt to solve additional design points in their place and does not
include them on the Direct Optimization Samples chart.

Size of Generated Sample Set: Indicates the number of samples generated in the sample set. For
Direct Optimization, this is the number of samples successfully updated. For Response Surface Optim-
ization, this is the number of samples successfully updated plus the number of different (non-duplicate)
samples generated by the Min-Max Search (if enabled).

Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum
Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain
details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display
a listing of the design points that were calculated during the optimization in the Table view. If the raw
optimization data point exists in the Design Point Parameter Set, then the corresponding design point
name is indicated in parentheses under the Name column.

Note

This list is compiled of raw data and does not show feasibility, ratings, etc. for the included
design points.


Performing a MOGA Optimization


The MOGA (Multi-Objective Genetic Algorithm) option can be used for both Response Surface Optim-
ization and Direct Optimization. It allows you to generate a new sample set or use an existing set for
providing a more refined approach than the Screening method. It is available for all types of input
parameters and can handle multiple goals. See the Multi-Objective Genetic Algorithm (MOGA) (p. 246)
theory section for more information.

Note

To display advanced options, go to Tools > Options > Design Exploration > Show Advanced
Options.

To perform a MOGA optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

Method Name: Set to MOGA.

Type of Initial Sampling: Advanced option that allows you to generate different kinds of sampling.
If you do not have any parameter relationships defined, specify Screening or OSF. Defaults to
Screening. If you have parameter relationships defined, the initial sampling must be performed by
the constrained sampling algorithms (because parameter relationships constrain the sampling) and
this property is automatically set to Constrained Sampling.

Random Generator Seed: Advanced option that displays only when Type of Initial Sampling is set to
OSF. Specify the value used to initialize the random number generator invoked internally by the OSF
algorithm. The value must be a positive integer. This property allows you to generate different samplings
(by changing the value) or to regenerate the same sampling (by keeping the same value). Defaults to
1.

Maximum Number of Cycles: Advanced option that displays only when Type of Initial Sampling is
set to OSF. Determines the number of optimization loops the algorithm needs, which in turn determines
the discrepancy of the OSF. The optimization is essentially combinatorial, so a large number of cycles
will slow down the process; however, this will make the discrepancy of the OSF smaller. The value
must be greater than 0. For practical purposes, 10 cycles is usually good for up to 20 variables. Defaults
to 10.

Number of Initial Samples: Specify the initial number of samples to be used. Must be greater than
or equal to the number of enabled input and output parameters. The minimum recommended number
of initial samples is 10 times the number of input parameters; the larger the initial sample set, the
better your chances of finding the input parameter space that contains the best solutions.

Must be greater than or equal to the number of enabled input and output parameters; the number
of enabled parameters is also the minimum number of samples required to generate the Sensit-
ivities chart. You can enter a minimum of 2 and a maximum of 10,000. Defaults to 100.

If you are switching from the Screening method to the MOGA method, MOGA generates a new
sample set. For the sake of consistency, enter the same number of initial samples as used for the
Screening optimization.


Number of Samples Per Iteration: The number of samples that are iterated and updated with each
iteration. This setting must be greater than or equal to the number of enabled input and output
parameters, but less than or equal to the number of initial samples. Defaults to 100 for a Response
Surface Optimization and to 50 for a Direct Optimization.

You can enter a minimum of 2 and a maximum of 10,000.

Maximum Allowable Pareto Percentage: Convergence criterion. A percentage which represents the
ratio of the number of desired Pareto points to the Number of Samples Per Iteration. When this per-
centage is reached, the optimization is converged. For example, a value of 70% with Number of Samples
Per Iteration set at 200 samples would mean that the optimization should stop once the resulting
front of the MOGA optimization contains at least 140 points. Of course, the optimization stops before
that if the Maximum Number of Iterations is reached.

If the Maximum Allowable Pareto Percentage is too low (below 30) then the process may converge
prematurely, and if the value is too high (above 80) it may converge slowly. The value of this
property depends on the number of parameters and the nature of the design space itself. Defaults
to 70. Using a value between 55 and 75 works best for most problems. For details, see Convergence
Criteria in MOGA-Based Multi-Objective Optimization (p. 245).
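The convergence test above amounts to a simple threshold check. A minimal sketch (the function name is illustrative, not part of DesignXplorer):

```python
# Number of Pareto points the resulting front must contain before the
# MOGA optimization is considered converged on this criterion.
def pareto_points_required(max_allowable_pareto_percentage, samples_per_iteration):
    return round(max_allowable_pareto_percentage / 100.0 * samples_per_iteration)

# The example from the text: 70% of 200 samples per iteration.
print(pareto_points_required(70, 200))  # 140
```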

Convergence Stability Percentage: Convergence criterion. A percentage which represents the stability
of the population, based on its mean and standard deviation. Allows you to minimize the number of
iterations performed while still reaching the desired level of stability. When this percentage is reached,
the optimization is converged. Defaults to 2%. To not take the convergence stability into account, set
to 0%. For details, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 245).

Maximum Number of Iterations: Stop criterion. The maximum possible number of iterations the al-
gorithm executes. If this number is reached without the optimization having reached convergence,
iteration will stop. This also provides an idea of the maximum possible number of function evaluations
that are needed for the full cycle, as well as the maximum possible time it may take to run the optim-
ization. For example, the absolute maximum number of evaluations is given by: Number of Initial
Samples + Number of Samples Per Iteration * (Maximum Number of Iterations - 1).
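As a quick sanity check of this bound (the helper name is illustrative, not a DesignXplorer function):

```python
# Upper bound on design-point evaluations for a MOGA run, per the formula
# above: initial samples, plus per-iteration samples for every iteration
# after the first.
def moga_max_evaluations(n_initial, n_per_iteration, max_iterations):
    return n_initial + n_per_iteration * (max_iterations - 1)

# Direct Optimization defaults (100 initial samples, 50 samples per
# iteration) with 20 iterations allowed:
print(moga_max_evaluations(100, 50, 20))  # 1050
```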

Mutation Probability: Advanced option allowing you to specify the probability of applying a mutation
on a design configuration. The value must be between 0 and 1. A larger value indicates a more random
algorithm; if the value is 1, the algorithm becomes a pure random search. A low probability of mutation
(<0.2) is recommended. Defaults to 0.01. For more information on mutation, see MOGA Steps to
Generate a New Population (p. 248).

Crossover Probability: Advanced option allowing you to specify the probability with which parent
solutions are recombined to generate the offspring solutions. The value must be between 0 and 1. A
smaller value indicates a more stable population and a faster (but less accurate) solution; if the value
is 0, then the parents are copied directly to the new population. A high probability of crossover (>0.9)
is recommended. Defaults to 0.98.

Type of Discrete Crossover: Advanced option allowing you to determine the kind of crossover for
discrete parameters. Three crossover types are available: One Point, Two Points, or Uniform. According
to the type of crossover selected, the children will be closer to or farther from their parents (closer for
One Point and farther for Uniform). Defaults to One Point. For more information on crossover, see
MOGA Steps to Generate a New Population (p. 248).
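The three crossover types can be sketched on a pair of discrete "parent" configurations as follows (a toy illustration of the general idea, not DesignXplorer's internal implementation; the cut positions are chosen by hand here rather than at random):

```python
import random

def one_point(p1, p2, cut):
    # Child keeps the first parent's genes up to the cut, then the second's:
    # children stay closest to their parents.
    return p1[:cut] + p2[cut:]

def two_point(p1, p2, cut1, cut2):
    # Only the middle segment comes from the second parent.
    return p1[:cut1] + p2[cut1:cut2] + p1[cut2:]

def uniform(p1, p2, rng):
    # Each gene is drawn from either parent with equal probability:
    # children land farthest from both parents on average.
    return [rng.choice(pair) for pair in zip(p1, p2)]

parent_a = [0, 0, 0, 0, 0, 0]
parent_b = [1, 1, 1, 1, 1, 1]
print(one_point(parent_a, parent_b, 3))     # [0, 0, 0, 1, 1, 1]
print(two_point(parent_a, parent_b, 2, 4))  # [0, 0, 1, 1, 0, 0]
print(uniform(parent_a, parent_b, random.Random(0)))
```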

Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be


generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table
View (p. 158).


Verify Candidate Points: Select to verify candidate points automatically at the end of the Optimization
update.

3. Specify optimization objectives and constraints.

In the Optimization tab Outline view, select the Objectives and Constraints node or an item under-
neath it.

In the Table or Properties view, define the Optimization Objectives. For details, see Defining Optim-
ization Objectives and Constraints (p. 153).

Note

For the MOGA method, at least one output parameter must have an objective defined.
Multiple objectives are allowed.

4. Specify the optimization domain.

In the Optimization tab Outline view, define the domain input parameters by enabling the desired
input parameters under the Domain node.

In the Table or Properties view, define the selected domain object. For details, see Defining the Op-
timization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select
the Update option.

The result is a group of points or sample set. The points that are most in alignment with the ob-
jectives are displayed in the table as the Candidate Points for the optimization. The result also
displays the read-only field Size of Generated Sample Set in the Properties view.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization
Status section of the Properties view. The following output properties are displayed:

Converged: Indicates whether the optimization has converged.

Obtained Pareto Percentage: A percentage representing the ratio of the number of Pareto points
obtained by the optimization to the Number of Samples Per Iteration.

Number of Iterations: Number of iterations executed. Each iteration corresponds to the generation
of a population.

Number of Evaluations: Number of design point evaluations performed. This value takes all points
used in the optimization, including design points pulled from the cache. Can be used to measure the
efficiency of the optimization method to find the optimum design point.

Number of Failures: Number of failed design points for the optimization. When a design point fails,
a MOGA optimization does not retain this point in the Pareto front and does not attempt to solve
another design point in its place. Failed design points are also not included on the Direct Optimization
Samples chart.


Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of samples successfully updated for the last population generated by the algorithm; usually equals
the Number of Samples Per Iteration.

Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum
Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain
details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display
a listing of the design points that were calculated during the optimization in the Table view. If the raw
optimization data point exists in the Design Point Parameter Set, then the corresponding design point
name is indicated in parentheses under the Name column.

Note

This list is compiled of raw data and does not show feasibility, ratings, etc. for the included
design points.

Performing an NLPQL Optimization


The NLPQL (Nonlinear Programming by Quadratic Lagrangian) option can be used for both Response
Surface Optimization and Direct Optimization. It allows you to generate a new sample set to provide a
more refined approach than the Screening method. It is available for continuous input parameters only
and can only handle one output parameter goal (other output parameters can be defined as constraints).
See the Nonlinear Programming by Quadratic Lagrangian (NLPQL) (p. 233) theory section for more in-
formation.

Note

In some cases, particularly for Direct Optimization problems and simulations with a great
deal of noise, the Epsilon for the NLPQL optimization method is not large enough to get
above simulation noise. In these cases, it is recommended that you try the Adaptive Single-
Objective (ASO) optimization method instead.

To generate samples and perform an NLPQL optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

Method Name: Set to NLPQL.

Allowable Convergence Percentage: The tolerance to which the Karush-Kuhn-Tucker (KKT) optimality
criterion is generated during the NLPQL process. A smaller value indicates more convergence iterations
and a more accurate (but slower) solution. A larger value indicates fewer convergence iterations and a
less accurate (but faster) solution. The typical default is 1.0e-06, which is consistent across all problem
types since the inputs, outputs, and the gradients are scaled during the NLPQL solution.


Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be


generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table
View (p. 158).

Derivative Approximation: When analytical derivatives are not available, the NLPQL method approx-
imates them numerically. This property allows you to specify the method of approximating the gradient
of the objective function. Select one of the following options:

Central Difference: The central difference method is used to calculate output derivatives. This
method increases the accuracy of the gradient calculations, but doubles the number of design point
evaluations. Default method for pre-existing databases and new Response Surface Optimization
systems.

Forward Difference: The forward difference method is used to calculate output derivatives. This
method uses fewer design point evaluations, but decreases the accuracy of the gradient calculations.
Default method for new Direct Optimization systems.

Maximum Number of Iterations: The maximum possible number of iterations the algorithm ex-
ecutes. If convergence happens before this number is reached, the iterations will cease. This also
provides an idea of the maximum possible number of function evaluations that are needed for
the full cycle, as well as the maximum possible time it may take to run the optimization. For
NLPQL, the number of evaluations can be approximated according to the Derivative Approxim-
ation gradient calculation method, as follows:

For Central Difference: number of iterations * (2*number of inputs +1)

For Forward Difference: number of iterations * (number of inputs+1)
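The difference in cost comes from how many function evaluations each gradient needs. A minimal finite-difference sketch (illustrative Python, not DesignXplorer's internals) that counts evaluations on a toy two-input objective:

```python
# Count the design-point evaluations one gradient pass needs for the
# forward-difference vs. central-difference schemes.
def fd_gradient(f, x, method="forward", h=1e-6):
    calls = 0

    def eval_f(pt):
        nonlocal calls
        calls += 1
        return f(pt)

    grad = []
    if method == "forward":
        fx = eval_f(x)                      # 1 evaluation at the base point
        for i in range(len(x)):             # + n perturbed evaluations
            xp = list(x)
            xp[i] += h
            grad.append((eval_f(xp) - fx) / h)
    else:                                   # central difference
        eval_f(x)                           # base point (objective value)
        for i in range(len(x)):             # + 2n perturbed evaluations
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            grad.append((eval_f(xp) - eval_f(xm)) / (2 * h))
    return grad, calls

f = lambda p: p[0] ** 2 + 3 * p[1]          # toy objective with 2 inputs
_, n_forward = fd_gradient(f, [1.0, 2.0], "forward")
_, n_central = fd_gradient(f, [1.0, 2.0], "central")
print(n_forward, n_central)                 # 3 5 -> (n + 1) vs (2n + 1)
```

Multiplying these per-iteration counts by the number of iterations gives the approximations in the text.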

3. Specify optimization objectives and constraints.

In the Optimization tab Outline view, select the Objectives and Constraints node or an item under-
neath it.

In the Table or Properties view, define the Optimization Objectives. For details, see Defining Optim-
ization Objectives and Constraints (p. 153).

Note

For the NLPQL method, exactly one output parameter must have an objective defined.

4. Specify the optimization domain.

In the Optimization tab Outline view, select the Domain node or an input parameter or parameter
relationship underneath it.

In the Table or Properties view, define the selected domain object. For details, see Defining the Op-
timization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select
the Update option.

The result is a group of points or sample set. The points that are most in alignment with the ob-
jective are displayed in the table as the Candidate Points for the optimization. The result also displays


the read-only field Size of Generated Sample Set in the Properties view. This value is always equal
to 1 for an NLPQL result.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization
Status section of the Properties view. The following output properties are displayed:

Converged: Indicates whether the optimization has converged.

Number of Iterations: Number of iterations executed. Each iteration corresponds to one formulation
and solution of the quadratic programming sub-problem, or alternatively, one evaluation of gradients.

Number of Evaluations: Number of design point evaluations performed. This value takes all points
used in the optimization, including design points pulled from the cache. Can be used to measure the
efficiency of the optimization method to find the optimum design point.

Number of Failures: Number of failed design points for the optimization. When a design point fails,
an NLPQL optimization changes the direction of its search; it does not attempt to solve an additional
design point in its place and does not include it on the Direct Optimization Samples chart.

Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of iteration points obtained by the optimization and should be equal to the number of iterations.

Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum
Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain
details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display
a listing of the design points that were calculated during the optimization in the Table view. If the raw
optimization data point exists in the Design Point Parameter Set, then the corresponding design point
name is indicated in parentheses under the Name column.

Note

This list is compiled of raw data and does not show feasibility, ratings, etc. for the included
design points.

Performing an MISQP Optimization


The MISQP (Mixed-Integer Sequential Quadratic Programming) option can be used for both Response
Surface Optimization and Direct Optimization. It allows you to generate a new sample set to provide a
more refined approach than the Screening method. It is available for both continuous and discrete input
parameters, and can only handle one output parameter goal (other output parameters can be defined
as constraints).

To generate samples and perform a MISQP optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Properties view:

Method Name: Set to MISQP.


Allowable Convergence Percentage: The tolerance to which the Karush-Kuhn-Tucker (KKT) optimality
criterion is generated during the MISQP process. A smaller value indicates more convergence iterations
and a more accurate (but slower) solution. A larger value indicates fewer convergence iterations and a
less accurate (but faster) solution. The typical default is 1.0e-06, which is consistent across all problem
types since the inputs, outputs, and the gradients are scaled during the MISQP solution.

Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be


generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table
View (p. 158).

Derivative Approximation: When analytical derivatives are not available, the MISQP method approx-
imates them numerically. This property allows you to specify the method of approximating the gradient
of the objective function. Select one of the following options:

Central Difference: The central difference method is used to calculate output derivatives. This
method increases the accuracy of the gradient calculations, but doubles the number of design point
evaluations. Default method for preexisting databases and new Goal Driven Optimization systems.

Forward Difference: The forward difference method is used to calculate output derivatives. This
method uses fewer design point evaluations, but decreases the accuracy of the gradient calculations.
Default method for new Direct Optimization systems.

Maximum Number of Iterations: The maximum possible number of iterations the algorithm ex-
ecutes. If convergence happens before this number is reached, the iterations will cease. This also
provides an idea of the maximum possible number of function evaluations that are needed for
the full cycle, as well as the maximum possible time it may take to run the optimization. For
MISQP, the number of evaluations can be approximated according to the Derivative Approxim-
ation gradient calculation method, as follows:

For Central Difference: number of iterations * (2*number of inputs +1)

For Forward Difference: number of iterations * (number of inputs+1)

Verify Candidate Points: Select to verify candidate points automatically at the end of the Optimization
update.

3. Specify optimization objectives and constraints.

In the Optimization tab Outline view, select the Objectives and Constraints node or an item under-
neath it.

In the Table or Properties view, define the Optimization Objectives. For details, see Defining Optim-
ization Objectives and Constraints (p. 153).

Note

For the MISQP method, exactly one output parameter must have an objective defined.

4. Specify the optimization domain.

In the Optimization tab Outline view, select the Domain node or an input parameter underneath it.


In the Table or Properties view, define the Optimization Domain. For details, see Defining the Op-
timization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select
the Update option.

The result is a group of points or sample set. The points that are most in alignment with the ob-
jective are displayed in the table as the Candidate Points for the optimization. The result also displays
the read-only field Size of Generated Sample Set in the Properties view. This value is always equal
to 1 for an MISQP result.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization
Status section of the Properties view. The following output properties are displayed:

Converged: Indicates whether the optimization has converged.

Number of Iterations: Number of iterations executed. Each iteration corresponds to one formulation
and solution of the quadratic programming sub-problem, or alternatively, one evaluation of gradients.

Number of Evaluations: Number of design point evaluations performed. This value takes all points
used in the optimization, including design points pulled from the cache. Can be used to measure the
efficiency of the optimization method to find the optimum design point.

Number of Failures: Number of failed design points for the optimization. When a design point fails,
an MISQP optimization changes the direction of its search; it does not attempt to solve an additional
design point in its place and does not include it on the Direct Optimization Samples chart.

Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of design points updated in the last iteration.

Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum
Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain
details in the Properties and Table views.

8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display
a listing of the design points that were calculated during the optimization in the Table view. If the raw
optimization data point exists in the Design Point Parameter Set, then the corresponding design point
name is indicated in parentheses under the Name column.

Note

This list is compiled of raw data and does not show feasibility, ratings, etc. for the included
design points.

Performing an Adaptive Single-Objective Optimization


The Adaptive Single-Objective (OSF + Kriging + MISQP with domain reduction) option can be used
only for Direct Optimization systems. This gradient-based method employs automatic intelligent
refinement to provide the global optimum. It requires a minimum number of design points to build the Kriging
response surface, but in general, reduces the number of design points necessary for the optimization.
Failed design points are treated as inequality constraints, making it fault-tolerant.


The Adaptive Single-Objective method is available for input parameters that are continuous and con-
tinuous with Manufacturable Values. It can handle only one output parameter goal (other output
parameters can be defined as constraints). It does not support the use of parameter relationships in
the optimization domain. See the Adaptive Single-Objective Optimization (ASO) (p. 241) theory section
for details.

Note

To display advanced options, go to Tools > Options > Design Exploration > Show Advanced
Options.

To generate samples and perform an Adaptive Single-Objective optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Optimization section of the Properties view:

Method Name: Set to Adaptive Single-Objective.

Number of Initial Samples: The number of samples generated for the initial Kriging and after all domain
reductions for the construction of the next Kriging.

You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF samples
required for the Kriging construction) and a maximum of 10,000. Defaults to (NbInp+1)*(NbInp+2)/2
for Direct Optimization (there is no default for Response Surface Optimization).

Because of the Adaptive Single-Objective workflow (in which a new OSF sample set is generated
after each domain reduction), increasing the number of OSF samples does not necessarily improve
the quality of the results and significantly increases the number of evaluations.
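For reference, this minimum grows quadratically with the number of inputs; a quick sketch of the formula (the helper name is illustrative):

```python
# Minimum number of OSF samples needed to build the Kriging response
# surface for n input parameters: (n + 1) * (n + 2) / 2.
def min_initial_samples(n_inputs):
    return (n_inputs + 1) * (n_inputs + 2) // 2

for n in (2, 5, 10):
    print(n, "inputs ->", min_initial_samples(n), "samples")
# 2 inputs -> 6 samples, 5 -> 21, 10 -> 66
```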

Random Generator Seed: Advanced option allowing you to specify the value used to initialize the
random number generator invoked internally by the OSF algorithm. The value must be a positive integer.
This property allows you to generate different samplings (by changing the value) or to regenerate the
same sampling (by keeping the same value). Defaults to 1.
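
The reproducibility behavior described above can be illustrated with a generic seeded generator; the sketch below uses Python's standard random module as a stand-in for the OSF sampler, which is internal to DesignXplorer:

```python
import random

# Stand-in for a seeded sampler: the same seed regenerates the same
# sampling; a different seed produces a different one. (Python's
# random module is used here for illustration, not the OSF algorithm.)
def sample(seed, n=5):
    rng = random.Random(seed)
    return [rng.uniform(0.0, 1.0) for _ in range(n)]

print(sample(1) == sample(1))  # True: same seed, same sampling
print(sample(1) == sample(2))  # False: changing the seed changes the sampling
```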

Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs,
which in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial,
so a large number of cycles will slow down the process; however, this will make the discrepancy of
the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually good
for up to 20 variables. Defaults to 10.

Number of Screening Samples: The number of samples for the screening generation on the current
Kriging. These samples are used to create the next Kriging (based on error prediction) and the verified candidates.

You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF samples
required for the Kriging construction) or a maximum of 10,000. Defaults to 100*NbInp for Direct
Optimization (there is no default for Response Surface Optimization).

The larger the screening sample set, the better the chances of finding good verified points.
However, too many points can result in a divergence of the Kriging.

Number of Starting Points: Determines the number of local optima to be explored; the larger the
starting points set, the more local optima will be explored. In the case of a linear surface, for example,
it is not necessary to use many points. This value must be less than the Number of Screening Samples
because the starting points are selected from that sample set. Defaults to the Number of Initial Samples.

Maximum Number of Evaluations: Stop criterion. The maximum possible number of evaluations
(design points) to be calculated by the algorithm. If convergence occurs before this number is reached,
evaluations will cease. This value also provides an idea of the maximum possible time it takes to run
the optimization. Defaults to 20*(NbInp +1).
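
The sample-count defaults quoted above can be written out explicitly; the function names below are ours, for illustration only:

```python
# Sizing rules quoted above for Adaptive Single-Objective (Direct
# Optimization defaults); function names are illustrative, not part
# of the DesignXplorer API.

def min_kriging_samples(n_inputs):
    """Minimum sample count required to build the Kriging surface."""
    return (n_inputs + 1) * (n_inputs + 2) // 2

def default_screening_samples(n_inputs):
    """Default Number of Screening Samples for Direct Optimization."""
    return 100 * n_inputs

def default_max_evaluations(n_inputs):
    """Default Maximum Number of Evaluations stop criterion."""
    return 20 * (n_inputs + 1)

# For a study with 5 input parameters:
print(min_kriging_samples(5))        # 21
print(default_screening_samples(5))  # 500
print(default_max_evaluations(5))    # 120
```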

Maximum Number of Domain Reductions: Stop criterion. The maximum possible number of domain
reductions for input variation. (No information is known about the size of the reduction beforehand).
Defaults to 20.

Percentage of Domain Reductions: Stop criterion. The minimum size of the current domain relative
to the initial domain. For example, with one input ranging between 0 and 100, the domain size is
equal to 100. If the percentage of domain reduction is 1%, the current working domain size cannot be
less than 1 (such as an input ranging between 5 and 6). Defaults to 0.1.

Convergence Tolerance: Stop criterion. The minimum allowable gap between the values of two suc-
cessive candidates. If the difference between two successive candidates is smaller than the Convergence
Tolerance value, the algorithm is stopped. A smaller value indicates more convergence iterations and
a more accurate (but slower) solution. A larger value indicates fewer convergence iterations and a less
accurate (but faster) solution. Defaults to 1E-06.
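
Taken together, the stop criteria above amount to checks along the following lines (a simplified sketch; the actual algorithm is internal to DesignXplorer, and all names here are illustrative):

```python
# Simplified sketch of three Adaptive Single-Objective stop criteria
# described above; all names are illustrative.

def should_stop(n_evaluations, max_evaluations,
                current_domain_size, initial_domain_size, pct_domain,
                last_candidate, prev_candidate, tolerance):
    # Maximum Number of Evaluations: budget exhausted.
    if n_evaluations >= max_evaluations:
        return True
    # Percentage of Domain Reductions: e.g. with an input ranging
    # 0..100 and a 1% limit, the domain may not shrink below size 1.
    if current_domain_size < (pct_domain / 100.0) * initial_domain_size:
        return True
    # Convergence Tolerance: two successive candidates closer than
    # the minimum allowable gap.
    if abs(last_candidate - prev_candidate) < tolerance:
        return True
    return False

print(should_stop(50, 1200, 0.9, 100.0, 1.0, 3.2, 3.1, 1e-6))  # True
```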

Retained Domain per Iteration (%): Advanced option that allows you to specify the minimum percentage
of the domain you want to keep after a domain reduction. The value must be between 10%
and 90%. A larger value indicates less domain reduction, which implies better exploration but a slower
solution. A smaller value indicates a faster solution, with the risk of it converging to a
local optimum. Defaults to 40%.

Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates to be
generated by the algorithm. For details, see Viewing and Editing Candidate Points in the Table
View (p. 158).

3. Specify optimization objectives and constraints.

In the Optimization tab Outline view, select the Objectives and Constraints node or an item under-
neath it.

In the Table or Properties view, define the Optimization Objectives. For details, see Defining Optim-
ization Objectives and Constraints (p. 153).

Note

For the Adaptive Single-Objective method, exactly one output parameter must have an
objective defined.

After you have defined an objective, a warning icon in the Message column of the Outline
view means that you have exceeded the recommended number of input parameters. For
more information, see Number of Input Parameters for DOE Types (p. 69).

4. Specify the optimization domain.

In the Optimization tab Outline view, select the Domain node or an input parameter underneath it.


In the Table or Properties view, define the Optimization Domain. For details, see Defining the Op-
timization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select
the Update context option.

The result is a group of points, or sample set. The points that are most closely aligned with the
objectives are displayed in the table as the Candidate Points for the optimization. The result also
displays the read-only field Size of Generated Sample Set in the Properties view.

6. Select the Optimization node in the Outline view and review convergence details in the Optimization
Status section of the Properties view. The following output properties are displayed:

Converged: Indicates whether the optimization has converged.

Number of Evaluations: Number of design point evaluations performed. This value takes into account
all points used in the optimization, including design points pulled from the cache. It corresponds to
the total of LHS points and verification points.

Number of Domain Reductions: Number of times the domain is reduced.

Number of Failures: Number of failed design points for the optimization. When a design point fails,
an Adaptive Single-Objective optimization changes the direction of its search; it does not attempt to
solve an additional design point in its place and does not include it on the Direct Optimization Samples
chart.

Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of unique design points successfully updated.

Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum
Number of Candidates input property.)

7. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to display
a listing of the design points that were calculated during the optimization in the Table view. If the raw
optimization data point exists in the Design Point Parameter Set, then the corresponding design point
name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc. for
the included design points.

Performing an Adaptive Multiple-Objective Optimization


The Adaptive Multiple-Objective (Kriging + MOGA) option can be used only for Direct Optimization
systems. It is an iterative algorithm that allows you to either generate a new sample set or use an existing
set, providing a more refined approach than the Screening method. It uses the same general approach
as MOGA, but applies the Kriging error predictor to reduce the number of evaluations needed to find
the global optimum.


The Adaptive Multiple-Objective method is only available for continuous input parameters and continuous
input parameters with Manufacturable Values. It can handle multiple objectives and multiple constraints.
For details, see Adaptive Multiple-Objective Optimization (AMO) (p. 251).

Note

To display advanced options, go to Tools > Options > Design Exploration > Show Advanced
Options.

To generate samples and perform an Adaptive Multiple-Objective optimization:

1. In the Optimization Outline view, select the Optimization node.

2. Enter the following input properties in the Optimization section of the Properties view:

Method Name: Set to Adaptive Multiple-Objective.

Type of Initial Sampling: If you do not have any parameter relationships defined, specify
Screening or OSF. Defaults to Screening. If you do have parameter relationships defined, the
initial sampling must be performed by the constrained sampling algorithms (because parameter
relationships constrain the sampling) and this property is automatically set to Constrained
Sampling.

Random Generator Seed: Advanced option that displays only when Type of Initial Sampling is
set to OSF. Specify the value used to initialize the random number generator invoked internally
by the OSF algorithm. The value must be a positive integer. This property allows you to generate
different samplings (by changing the value) or to regenerate the same sampling (by keeping the
same value). Defaults to 1.

Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs,
which in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial,
so a large number of cycles will slow down the process; however, this will make the discrepancy
of the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually
good for up to 20 variables. Defaults to 10.

Number of Initial Samples: Specify the initial number of samples to be used. The minimum re-
commended number of initial samples is 10 times the number of input parameters; the larger the
initial sample set, the better your chances of finding the input parameter space that contains the
best solutions.

Must be greater than or equal to the number of enabled input and output parameters; the
number of enabled parameters is also the minimum number of samples required to generate
the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10,000. Defaults
to 100.

If you are switching from the Screening method to the MOGA method, MOGA generates a
new sample set. For the sake of consistency, enter the same number of initial samples as
used for the Screening optimization.

Number of Samples Per Iteration: Defaults to 100. The number of samples that are iterated and
updated at each iteration. This setting must be less than or equal to the number of initial samples.

Maximum Allowable Pareto Percentage: Convergence criterion. A percentage which represents
the ratio of the number of desired Pareto points to the Number of Samples Per Iteration. When
this percentage is reached, the optimization is converged. For example, a value of 70% with
Number of Samples Per Iteration set at 200 samples would mean that the optimization should
stop once the resulting front of the MOGA optimization contains at least 140 points. Of course,
the optimization stops before that if the Maximum Number of Iterations is reached.

If the Maximum Allowable Pareto Percentage is too low (below 30) then the process may
converge prematurely, and if the value is too high (above 80) it may converge slowly. The
value of this property depends on the number of parameters and the nature of the design
space itself. Defaults to 70. Using a value between 55 and 75 works best for most problems.
For details, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 245).
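
The Pareto-percentage convergence test described above reduces to a simple comparison; the sketch below (with illustrative names, not DesignXplorer API) reproduces the 70%-of-200 example:

```python
# Sketch of the Maximum Allowable Pareto Percentage convergence test;
# names are illustrative, not DesignXplorer API.

def pareto_converged(n_pareto_points, samples_per_iteration, max_pareto_pct):
    """Converged once the Pareto front holds at least the requested
    percentage of the per-iteration sample count."""
    required = (max_pareto_pct / 100.0) * samples_per_iteration
    return n_pareto_points >= required

# 70% of 200 samples per iteration -> at least 140 Pareto points:
print(pareto_converged(140, 200, 70))  # True
print(pareto_converged(139, 200, 70))  # False
```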

Convergence Stability Percentage: Convergence criterion. A percentage which represents the
stability of the population, based on its mean and standard deviation. Allows you to minimize
the number of iterations performed while still reaching the desired level of stability. When this
percentage is reached, the optimization is converged. Defaults to 2%. To not take the convergence
stability into account, set to 0%. For details, see Convergence Criteria in MOGA-Based Multi-Ob-
jective Optimization (p. 245).

Maximum Number of Iterations: Stop criterion. The maximum possible number of iterations the
algorithm executes. If this number is reached without the optimization having reached convergence,
iteration will stop. This also provides an idea of the maximum possible number of function evalu-
ations that are needed for the full cycle, as well as the maximum possible time it may take to run
the optimization. For example, the absolute maximum number of evaluations is given by: Number
of Initial Samples + Number of Samples Per Iteration * (Maximum Number of Iterations - 1).
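
That evaluation-budget formula can be written out directly (the function name is ours):

```python
# Absolute maximum number of evaluations quoted above for the
# Adaptive Multiple-Objective method.

def max_total_evaluations(n_initial, n_per_iteration, max_iterations):
    return n_initial + n_per_iteration * (max_iterations - 1)

# With the default 100 initial samples, 100 samples per iteration,
# and an assumed limit of 20 iterations:
print(max_total_evaluations(100, 100, 20))  # 2000
```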

Mutation Probability: Advanced option allowing you to specify the probability of applying a
mutation on a design configuration. The value must be between 0 and 1. A larger value indicates
a more random algorithm; if the value is 1, the algorithm becomes a pure random search. A low
probability of mutation (<0.2) is recommended. Defaults to 0.01. For more information on mutation,
see MOGA Steps to Generate a New Population (p. 248).

Crossover Probability: Advanced option allowing you to specify the probability with which parent
solutions are recombined to generate the offspring solutions. The value must be between 0 and
1. A smaller value indicates a more stable population and a faster (but less accurate) solution; if
the value is 0, then the parents are copied directly to the new population. A high probability of
crossover (>0.9) is recommended. Defaults to 0.98.

Type of Discrete Crossover: Advanced option allowing you to determine the kind of crossover
for discrete parameters. Three crossover types are available: One Point, Two Points, or Uniform.
According to the type of crossover selected, the children will be closer to or farther from their
parents (closer for One Point and farther for Uniform). Defaults to One Point. For more information
on crossover, see MOGA Steps to Generate a New Population (p. 248).
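
The One Point versus Uniform distinction above can be seen in a generic genetic-algorithm sketch; this is a textbook illustration of the crossover idea, not DesignXplorer's internal implementation:

```python
import random

def one_point_crossover(parent_a, parent_b, rng):
    """Children keep a contiguous block from each parent, so they
    stay relatively close to the parents."""
    cut = rng.randrange(1, len(parent_a))
    return (parent_a[:cut] + parent_b[cut:],
            parent_b[:cut] + parent_a[cut:])

def uniform_crossover(parent_a, parent_b, rng):
    """Each gene is drawn independently from either parent, so the
    children tend to be farther from both parents."""
    child_a, child_b = [], []
    for a, b in zip(parent_a, parent_b):
        if rng.random() < 0.5:
            child_a.append(a)
            child_b.append(b)
        else:
            child_a.append(b)
            child_b.append(a)
    return child_a, child_b

rng = random.Random(1)  # fixed seed for repeatability
print(one_point_crossover([1, 1, 1, 1], [2, 2, 2, 2], rng))
```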

Maximum Number of Candidates: Defaults to 3. The maximum possible number of candidates
to be generated by the algorithm. For details, see Viewing and Editing Candidate Points in the
Table View (p. 158).

3. Specify optimization objectives and constraints.

In the Optimization tab Outline view, select the Objectives and Constraints node or an item
underneath it.


In the Table or Properties view, define the Optimization Objectives. For details, see Defining
Optimization Objectives and Constraints (p. 153).

Note

For the Adaptive Multiple-Objective method, at least one output parameter must
have an objective defined. Multiple objectives are allowed.

After you have defined an objective, a warning icon in the Message column of the
Outline view means that you have exceeded the recommended number of input
parameters. For more information, see Number of Input Parameters for DOE
Types (p. 69).

4. Specify the optimization domain.

In the Optimization tab Outline view, select the Domain node or an input parameter or parameter
relationship underneath it.

In the Table or Properties view, define the selected domain object. For details, see Defining the
Optimization Domain (p. 148).

5. Click the Update toolbar button or right-click the Optimization node in the Outline view and select
the Update context option.

The result is a group of points, or sample set. The points that are most closely aligned with the
objectives are displayed in the table as the Candidate Points for the optimization. The result
also displays the read-only field Size of Generated Sample Set in the Properties view.

6. Select the Optimization node in the Outline view and review convergence details in the Optimiz-
ation Status section of the Properties view. The following output properties are displayed:

Converged: Indicates whether the optimization has converged.

Number of Evaluations: Number of design point evaluations performed. This value takes into account
all points used in the optimization, including design points pulled from the cache. It can be used to
measure the efficiency of the optimization method in finding the optimum design point.

Number of Failures: Number of failed design points for the optimization. When a design point
fails, an Adaptive Multiple-Objective optimization does not retain this point in the Pareto front
to generate the next population, attempt to solve an additional design point in its place, or include
it on the Direct Optimization Samples chart.

Size of Generated Sample Set: Number of samples generated in the sample set. This is the
number of samples successfully updated for the last population generated by the algorithm;
usually equals the Number of Samples Per Iteration.

Number of Candidates: Number of candidates obtained. (This value is limited by the Maximum
Number of Candidates input property.)

7. In the Outline view, you can select the Domain node or any object underneath it to review domain
details in the Properties and Table views.


8. For a Direct Optimization, you can select the Raw Optimization Data node in the Outline view to
display a listing of the design points that were calculated during the optimization in the Table view.
If the raw optimization data point exists in the Design Point Parameter Set, then the corresponding
design point name is indicated in parentheses under the Name column.

Note

This list is compiled from raw data and does not show feasibility, ratings, etc.
for the included design points.

Performing an Optimization with an External Optimizer


In addition to using the optimization algorithms provided by DesignXplorer, you can also use external
optimizers within the DesignXplorer environment. You can install and load optimization extensions,
integrating the features of an external optimizer into your design exploration workflow. An optimization
extension specifies the properties, objectives and constraints, and functionality for one or more optimizers.

Once an extension is installed, it is displayed in the Extensions Manager and can be loaded to the
project. Once the extension is loaded to the project, the optimizers defined by the extension are displayed
as options in the Method Name property drop-down.

You can also specify whether extensions are saved to your project. For more detailed information on
saving extensions, see the Extensions section in the Workbench User's Guide.

Note

For information on the creation of optimization extensions, see the Application Customization
Toolkit Developer's Guide and the Application Customization Toolkit Reference Guide. These
documents are part of the ANSYS Customization Suite on the ANSYS Customer Portal. To
access documentation files on the ANSYS Customer Portal, go to http://support.ansys.com/
documentation.

Related Topics:
Locating and Downloading Available Extensions
Installing an Optimization Extension
Loading an Optimization Extension
Selecting an External Optimizer
Setting Up an External Optimizer Project
Performing the Optimization

Locating and Downloading Available Extensions


You can use custom optimization extensions that are developed by your company. You can also
download existing extensions from the ANSYS Extension Library (on the Downloads page of the ANSYS
Customer Portal). Optimization extensions for DesignXplorer are located in the ACT Library. To access
documentation files on the ANSYS Customer Portal, go to http://support.ansys.com/documentation.

Installing an Optimization Extension


To use an optimization extension, you must first install it to make it available to ANSYS Workbench and
DesignXplorer.


To install an extension:

1. In the Project Schematic, select the Extensions > Install Extension menu option.

2. Browse to the location of the extension you wish to install.

In most cases, the extension you select will be a binary extension in which the optimizer has
already been defined and compiled into the .wbex file format. (Once the extension is compiled,
the optimizer definition can no longer be changed.)

3. Select the extension and click Open.

The extension is installed in your App Data directory. Once installed, it is shown in the Exten-
sions Manager and remains available to be loaded to your project(s).

Loading an Optimization Extension


Once the extension has been installed, it becomes available to be loaded to your ANSYS Workbench
project.

Note

The extension must be loaded separately to each project unless you have specified it as a
default extension via the Workbench Tools > Options dialog. For more detailed information,
see the Extensions section in the Workbench User's Guide.

To load an extension to your project:

1. In the Project Schematic, select the Extensions > Manage Extensions menu option.

2. The Extensions Manager dialog shows all the installed extensions. Select the check box for each
extension to be loaded to the project and close the dialog.

3. The extension should now be loaded to the project, which means that it is available to be selected
as an optimization method. You can select Extensions > View Log File to verify that the extension
loaded successfully.

Selecting an External Optimizer


Once an extension has been loaded to your project, the external optimizers defined in it are available
to be selected and used as an optimization method. In the Optimization tab Properties view, external
optimizers are included as options in the Method Name drop down, along with the optimization
methods provided by DesignXplorer. Once an optimizer is selected, you can edit the optimization
properties.

DesignXplorer filters the Method Name list for applicability to the current project; that is, it displays only
those optimization methods that can be used to solve the optimization problem as it is currently defined.
When no objectives or constraints are defined for a project, all optimization methods are displayed in
the Method Name drop down. If you already know that you want to use a particular external optimizer,
it is recommended that you select it as the method before setting up the rest of the project. Otherwise,
the optimization method could be inadvertently filtered from the list.

In some cases, a necessary extension is not available when a project is reopened. This can happen when
the extension is unloaded or when the extension used to solve the project was not saved to the project
upon exit. When this happens, the Method Name drop down displays an alternate label,
<extensionname>@<optimizername>, instead of the external optimizer name. You can still do post-processing
with the project (because the optimizer has already solved the problem), but the properties are read-
only and you cannot use the optimizer to perform further calculations. When you select a different
optimization method, the alternate label will disappear from the list of options.

Setting Up an External Optimizer Project


The DesignXplorer interface shows only the optimization functionality that is specifically defined in the
extension. Additionally, it filters objectives and constraints according to the optimization method
selected; that is, only those objects supported by the selected optimization method are available for selection.
For example, if you have selected an optimizer that does not support the Maximize objective type,
Maximize will not be displayed as an option in the Objective Type drop down menu.

If you already have a specific problem you want to solve, it is recommended that you set up the project
before selecting an optimization method from the Method Name drop down. Otherwise, the desired
objectives and constraints could be filtered from the user interface.

Performing the Optimization


Once the external optimizer is selected and the optimization study is defined, the process of running
the optimization is the same as for any of the native optimization algorithms. DesignXplorer's intermediate
functionality (such as iterative updates of the History charts and sparklines) and post-processing
functionality (such as candidate points and other optimization charts) are still available.

Defining the Optimization Domain


When you edit the Optimization component of a Goal Driven Optimization system, there are multiple
ways of defining the parameter space for each input parameter. The optimization domain settings allow
you to reduce the range of variation for the input so that samples are generated only in the reduced
space (i.e., within the defined bounds), ensuring that only samples that can be used for your optimization
are created. You can do this for two types of domain objects: input parameters and/or parameter rela-
tionships. Both types are under the Domain node in the Outline view.

When you select the Domain node or any object underneath it, the Table view shows the input para-
meters and parameter relationships that are defined and enabled for the optimization. (Disabled domain
objects do not display in the table.)


Defining Input Parameters


Under the Domain node in the Outline view, you can enable or disable the input parameters for the
parameter space by toggling the check box under the Enabled column for the corresponding input
parameter. When the input parameter is enabled, you can edit the bounds and, where applicable, the
starting value by either of the following methods:

In the Outline view, select the Domain node. Edit the input parameter domain via the Table view.

In the Outline view, select an input parameter under the Domain node. Edit the input parameter domain
via either the Properties or Table view.

The following editable settings are available for enabled input parameters in the Properties and Table
views:

Lower Bound
Allows you to define the Lower Bound for the optimization input parameter space; increasing the
defined lower bound would confine the optimization to a subset of the DOE domain. By default, corres-
ponds to the following values defined in the DOE:

the Lower Bound (for continuous parameters)

the lowest discrete Level (for discrete parameters)

the lowest manufacturable Level (for continuous parameters with Manufacturable Values)

Upper Bound
Allows you to define the Upper Bound for the input parameter space. By default, corresponds to the
following values defined in the DOE:

the Upper Bound (for continuous parameters)

the highest discrete Level (for discrete parameters)

the highest manufacturable Level (for continuous parameters with Manufacturable Values)


Starting Value
Available only for the NLPQL and MISQP optimization methods. Allows you to specify where the optim-
ization starts for each input parameter.

Because NLPQL and MISQP are gradient-based methods, the starting point in the parameter space
determines the candidates found; with a poor starting point, NLPQL and MISQP could find a local
optimum, which is not necessarily the same as the global optimum. This setting gives you
more control over your optimization results by allowing you to specify exactly where in the parameter
space the optimization should begin.

Must fall between the Lower Bound and Upper Bound.

Must fall within the domain constrained by the enabled parameter relationships. For more informa-
tion, see Defining Parameter Relationships (p. 150).
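
The sensitivity of gradient-based methods to the starting point can be seen in a toy example; the sketch below uses plain gradient descent on a function with two minima (it is not NLPQL or MISQP, just an illustration of why the Starting Value matters):

```python
# f(x) = x**4 - 2*x**2 has two local minima, at x = -1 and x = +1.
# A gradient-based search converges to whichever one is nearer the
# starting value.

def df(x):
    return 4 * x**3 - 4 * x  # derivative of f(x) = x**4 - 2*x**2

def gradient_descent(start, step=0.01, iters=5000):
    x = start
    for _ in range(iters):
        x -= step * df(x)
    return x

print(round(gradient_descent(-0.5), 3))  # -1.0: one starting value, one optimum
print(round(gradient_descent(0.5), 3))   # 1.0: another start, another optimum
```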

For each disabled input parameter, specify the desired value to use in the optimization. By default,
the value was copied from the current design point when the Optimization system was created.

Note

When the optimization is refreshed, the disabled input values will be persisted; they
will not be updated to the current design point values.

Defining Parameter Relationships


Parameter relationships give you greater flexibility in defining optimization limits on input parameters
than the standard single-parameter constraints such as Greater Than, Less Than, and Inside Bounds.
When you define parameter relationships, you can specify expression-based relationships between
multiple input parameters, with the values remaining physically bounded and reflecting the constraints
on the optimization problem.

Note

To specify parameter relationships for outputs, you can create derived parameters. You can
create derived parameters for an analysis system by providing expressions. The derived
parameters are then passed to DesignXplorer as outputs. For more information, see Custom
Parameters

A parameter relationship is comprised of one operator and two expressions. For example, a project
might define two parameter relationships, each involving input parameters P1 and
P2.
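
Conceptually, each enabled relationship acts as a feasibility test on a sampled point; the sketch below (with made-up example relationships, not taken from a real project) shows the idea:

```python
import operator

# Each relationship pairs two expressions with an operator (<= or >=),
# as described above. These example relationships are hypothetical.
relationships = [
    (lambda p: p["P1"], operator.le, lambda p: p["P2"]),         # P1 <= P2
    (lambda p: p["P1"] + p["P2"], operator.le, lambda p: 10.0),  # P1 + P2 <= 10
]

def satisfies_relationships(point):
    """A point is kept for the optimization only if every enabled
    relationship holds."""
    return all(op(left(point), right(point))
               for left, op, right in relationships)

print(satisfies_relationships({"P1": 2.0, "P2": 3.0}))  # True
print(satisfies_relationships({"P1": 4.0, "P2": 3.0}))  # False: P1 > P2
```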


This section discusses how to create, edit, enable/disable, and view parameter relationships.

Creating Parameter Relationships


You can create new domain parameter relationships by the following methods:

In the Outline view, right-click the Parameter Relationships node and select Insert Parameter Relation-
ship from the context menu.

In the Parameter Relationships section of the Table view, type parameter relationship details into the
bottom row.

Editing Parameter Relationships


Once a parameter relationship has been created, you can edit it by the following methods:

In the Outline view, select either the Domain or the Parameter Relationships node underneath it. Edit
the parameter relationship via the Table view.

In the Outline view, select a parameter relationship under the Parameter Relationships node. Edit the
parameter relationship via the Properties or Table view.

The table below shows editing tasks the Properties, Table, and Outline views allow you to perform.

Task Properties Table Outline


Rename a relationship by double-clicking its X X X
name and typing in a custom name
Edit expressions by typing into the expression X X
cell
Change operators by selecting from the drop- X X
down menu
Duplicate a relationship by right-clicking it and X X X
selecting Duplicate
Delete a relationship by right-clicking it and X X X
selecting Delete


Enabling/Disabling Parameter Relationships


You can enable or disable a parameter relationship by selecting/deselecting the Enabled check box.
Enabling a parameter relationship will cause it to be displayed on the corresponding History chart and
sparkline. For more information, see Using the History Chart (p. 165).

Parameter Relationship Properties


The following properties are available for parameter relationships:

Name
Editable in the Properties and Table views. Each parameter relationship is given a default Name, such
as Parameter or Parameter 2, based on the order in which the parameter relationship was created.
When you define both the left and right expressions for the parameter relationship, the default name
is replaced by the relationship. For example, Parameter 2 may become P1<=P2. The name will be
updated accordingly when either of the expressions is modified.

The Name property allows you to edit the name of the parameter relationship. Once you edit the
Name, the name persists. To resume the automated naming system, just delete the custom name
and leave the property empty.
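The naming rule described above can be sketched in Python. This is an illustration only: the class, its attributes, and the default names are stand-ins, not part of the DesignXplorer API.

```python
class ParameterRelationship:
    """Illustrative sketch of the default-naming rule for parameter
    relationships. Hypothetical class, not a DesignXplorer API."""

    def __init__(self, index):
        # Default name based on creation order, e.g. "Parameter" or "Parameter 2".
        self._default = "Parameter" if index == 1 else "Parameter %d" % index
        self._custom_name = None   # set when the user types a custom name
        self.left = self.operator = self.right = None

    @property
    def name(self):
        if self._custom_name:          # a custom name persists...
            return self._custom_name
        if self.left and self.right:   # ...otherwise the relationship is the name
            return "%s%s%s" % (self.left, self.operator, self.right)
        return self._default

rel = ParameterRelationship(2)
rel.left, rel.operator, rel.right = "P1", "<=", "P2"
print(rel.name)                # P1<=P2
rel._custom_name = "My Relationship"
print(rel.name)                # My Relationship
rel._custom_name = None        # clearing the custom name resumes automatic naming
print(rel.name)                # P1<=P2
```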

Left Expression/Right Expression


Editable in the Properties and Table views. Allows you to define the parameter relationship expressions.

Left Expression Quantity Name/Right Expression Quantity Name


Viewable in the Properties view. Shows the quantity type for the expressions in the parameter relationship.

Operator
Editable in the Properties and Table views. Allows you to select the expression operator from a drop-
down menu. Available values are <= and >=.

Viewing Current Expression Values


For optimization methods that have a Starting Value (NLPQL and MISQP), each expression for a
parameter relationship is evaluated based on the starting parameter values.
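This evaluation can be pictured as follows. The parameter names, starting values, and expressions are hypothetical, and Python's eval stands in for DesignXplorer's own expression engine:

```python
# Starting parameter values (hypothetical), as defined for NLPQL or MISQP.
starting_values = {"P1": 2.0, "P2": 5.0, "P3": 4.0}

def evaluate_relationship(left, operator, right, values):
    """Evaluate both sides of a parameter relationship at the given
    parameter values and report whether the relationship is satisfied."""
    lhs = eval(left, {"__builtins__": {}}, dict(values))
    rhs = eval(right, {"__builtins__": {}}, dict(values))
    satisfied = (lhs <= rhs) if operator == "<=" else (lhs >= rhs)
    return lhs, rhs, satisfied

print(evaluate_relationship("P1 + P2", "<=", "2 * P3", starting_values))
# (7.0, 8.0, True): the starting point satisfies P1 + P2 <= 2 * P3
```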

When the evaluation is complete, the value for each expression is displayed:

In the Domain node Table view. To view expression values for a parameter relationship, click the plus
icon next to its name. The values display beneath the corresponding expression.

In the Candidate Points node Table view. In the Properties view, select the Show Parameter
Relationships check box. Parameter relationships that are defined and enabled, along with their
expressions (and current expression values, for NLPQL and MISQP), are shown in the Candidate Points
table. See Viewing and Editing Candidate Points in the Table View (p. 158).

If the evaluation fails, an error message is displayed in the Outline and Properties views. Review the
error to identify problems with the corresponding parameter relationship.

Note

The evaluation may fail because the selected optimization method does not support
parameter relationships or because the optimization includes one or more invalid
parameter relationships. Parameter relationships may be invalid if they contain quantities
that are not comparable, parameters for which the values are unknown, or expressions
that are incorrect.

The gradient-based optimization methods, NLPQL and MISQP, require a feasible Starting
Value that falls within the domain constrained by the enabled parameter relationships. If
the Starting Value is infeasible, the optimization cannot be updated.

Defining Optimization Objectives and Constraints


When you edit the Optimization cell of a Goal Driven Optimization system and select the Objectives
and Constraints node in the Optimization tab Outline view, the Optimization Objectives section in the Table
view allows you to define design goals in the form of objectives and constraints that will be used to
generate optimized designs. Additional optimization properties can be set in the Properties view. The
optimization approach used in the design exploration environment departs in many ways from tradi-
tional optimization techniques, giving you added flexibility in obtaining the desired design configuration.

Note

If you are using an external optimizer, DesignXplorer filters objectives and constraints according
to the optimization method selected (i.e., only the objective types and constraint types
supported by the selected optimization method are available for selection). For example, if
you have selected an optimizer that does not support the Maximize objective type, Maximize
will not be displayed as an option in the Objective Type drop-down menu.

The following optimization settings are available and can be edited in both the Table view and Prop-
erties view.

Objective Name
Allows you to define an objective for each parameter. If desired, you can also enter a name for the
objective. Each objective is given a default name based on its defined properties, which is updated
each time the objective is modified. For example, when an objective is defined to Minimize parameter
P1, the objective name is set to "Minimize P1". If the objective type is changed to Maximize, the
objective name becomes "Maximize P1". Then, if a constraint type of Values >= Bound is added and
Lower Bound is set to 3, the objective name becomes "Maximize P1; P1 >= 3". The name is displayed
under the Objectives and Constraints node of the Outline view.

The Objective Name also allows you to edit the name of the objective. In the Optimization tab,
you can edit the name of an objective in the Outline view. Additionally, you can select the Objectives
and Constraints node of the Outline view and edit the name in the Table view, or select the objective
itself in the Outline view and edit the name in either the Properties or the Table view.
Once you edit the Objective Name, the name persists and is no longer changed by modifications
to the objective. To resume the automated naming system, just delete the custom name and leave
the property empty.

Available options vary according to the type of parameter and whether it is an input or output
parameter.

Parameter
Allows you to select the parameter (either input or output) for which the objective is being defined.


Objective Type
Available for both continuous input parameters and output parameters. Allows you to define an objective
by specifying the Objective Type (see the tables below for available objective types).

Constraint Type
Available for output parameters, discrete input parameters, and continuous input parameters with
Manufacturable Values. Allows you to define a constraint by specifying the Constraint Type (see the
tables below for available constraint types).

Objective Target
For a parameter that has an Objective Type of Seek Target, allows you to specify the target value for
the objective.

Constraint Lower Bound/Constraint Upper Bound


For a parameter with a Constraint Type other than Lower Bound <= Values <= Upper Bound, set
Lower Bound to the target value for the constraint.

For a parameter with a Constraint Type of Lower Bound <= Values <= Upper Bound, set both
the Lower Bound and the Upper Bound to define an acceptable range for the output. The second
value must be greater than the first. Available only for continuous output parameters.

Objectives for Continuous Input Parameters

(X = an input parameter, XLower = lower limit, XUpper = upper limit, XTarget = Target)

Objective      Description                                        Mathematical Meaning
No Objective   Keep the input parameter within its upper          XLower <= X <= XUpper
               and lower bounds.
Minimize       Minimize the input parameter within its range.     X → XLower
Maximize       Maximize the input parameter within its range.     X → XUpper
Seek Target    Achieve an input parameter value that is close     X → XTarget
               to the objective Target.

Objectives for Output Parameters

(Y = output parameter, YTarget = Target)

Objective      Description                                        Mathematical Meaning   GDO Treatment
No Objective   No objective is specified.                         N/A                    N/A
Minimize       Achieve the lowest possible value for the          Minimize Y             Objective
               output parameter.
Maximize       Achieve the highest possible value for the         Maximize Y             Objective
               output parameter.
Seek Target    Achieve an output parameter value that is          Y → YTarget            Objective
               close to the objective Target.

Constraints for Discrete Input Parameters or Continuous Input Parameters with Manufacturable
Values

(X = an input parameter, XLowerBound = Lower Bound, XUpperBound = Upper Bound)

Constraint              Description                                 Mathematical Meaning
No Constraint           Uses the full discrete range allowed        XLowerBound <= X <= XUpperBound
                        for the parameter.
Values = Bound          Set the constraint to in-range values       X → XLowerBound
                        close to the Lower Bound.
Values >= Lower Bound   Set the constraint to in-range values       X >= XLowerBound
                        above the Lower Bound.
Values <= Upper Bound   Set the constraint to in-range values       X <= XUpperBound
                        below the Upper Bound.

Constraints for Output Parameters

(Y = output parameter, YLowerBound = Lower Bound, YUpperBound = Upper Bound)

Constraint              Description                                   Mathematical Meaning              GDO Treatment
No Constraint           No constraint is specified.                   N/A                               N/A
Values = Bound          If a Lower Bound is specified, then           Y → YLowerBound                   Constraint
                        achieve an output parameter value that
                        is close to that bound.
Values >= Lower Bound   If a Lower Bound is specified, then           Y >= YLowerBound                  Inequality constraint
                        achieve an output parameter value that
                        is above the Lower Bound.
Values <= Upper Bound   If an Upper Bound is specified, then          Y <= YUpperBound                  Inequality constraint
                        achieve an output parameter value that
                        is below the Upper Bound.
Lower Bound <= Values   If the Lower Bound and the Upper Bound        YLowerBound <= Y <= YUpperBound,  Constraint
<= Upper Bound          are specified, then achieve an output        given YLowerBound < YUpperBound
                        parameter that is within the defined range.
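The constraint types in the tables above reduce to simple numeric checks. A minimal sketch follows; the bound values are hypothetical, and the tolerance used for Values = Bound is an assumption, since the documentation only says "close to that bound":

```python
def constraint_satisfied(ctype, y, lower=None, upper=None, tol=1e-3):
    """Numeric reading of the output-parameter constraint types above.
    The tolerance for 'Values = Bound' is an assumed illustration."""
    if ctype == "No Constraint":
        return True
    if ctype == "Values = Bound":
        return abs(y - lower) <= tol
    if ctype == "Values >= Lower Bound":
        return y >= lower
    if ctype == "Values <= Upper Bound":
        return y <= upper
    if ctype == "Lower Bound <= Values <= Upper Bound":
        return lower <= y <= upper        # meaningful only when lower < upper
    raise ValueError("unknown constraint type: %s" % ctype)

print(constraint_satisfied("Lower Bound <= Values <= Upper Bound",
                           3.5, lower=3.0, upper=4.0))   # True
```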

The following post-processing properties are available and editable in the DSP (Decision Support Process)
section of the Properties view:


Objective Importance/Constraint Importance


For any parameter with an Objective or Constraint defined, allows you to select the relative importance
of that parameter in regard to the other objectives. Available options on the drop-down are Default,
Higher, and Lower.

Constraint Lower Bound/Constraint Upper Bound


For each parameter that has a Constraint Type of Lower Bound <= Values <= Upper Bound, allows
you to enter a value for the Lower Bound and Upper Bound, defining an acceptable range for the
output. The second value must be greater than the first. Available only for continuous output parameters.

Constraint Handling
For any constrained parameter, allows you to specify the Constraint Handling for that parameter.

This option can be used for any optimization application and is best thought of as a "constraint
satisfaction" filter on samples generated from the optimization runs. This is especially useful for
Screening samples to detect the edges of solution feasibility for highly constrained nonlinear optim-
ization problems. The following choices are available:

Relaxed: Samples are generated in the full parameter space, with the constraint only being used to
identify the best candidates. When this option is selected, the upper-bound, lower-bound, and equality
constraints of the candidate points shown in the Table of Candidate Points are treated as objectives;
thus any violation of the constraint is still considered feasible.

Strict (default): Samples are generated in the reduced parameter space defined by the constraint.
When this option is selected, the upper, lower, and equality constraints are treated as hard constraints;
that is, if any of them are violated then the candidate is no longer displayed.
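The difference between the two options can be pictured as a filter over the optimization samples. The sample values and the single y <= upper constraint below are hypothetical:

```python
samples = [{"y": 1.0}, {"y": 4.5}, {"y": 2.5}]   # hypothetical output values
upper = 3.0                                       # hypothetical Upper Bound: y <= 3.0

def candidates(samples, handling):
    if handling == "Strict":
        # Hard constraint: violating samples are no longer displayed.
        return [s for s in samples if s["y"] <= upper]
    # Relaxed: all samples remain candidates; the violation is only used
    # to rank them (the constraint is treated like an objective).
    return sorted(samples, key=lambda s: max(0.0, s["y"] - upper))

print([s["y"] for s in candidates(samples, "Strict")])    # [1.0, 2.5]
print([s["y"] for s in candidates(samples, "Relaxed")])   # [1.0, 2.5, 4.5]
```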


Objectives, Constraints, and Optimization Methods


Objective and Constraint Requirements According to Optimization Method:

Screening uses any goals that are defined to color the samples in the Samples chart, but does not
have any requirements concerning the number of objectives or constraints defined.

MOGA and Adaptive Multiple-Objective require that an objective is defined for at least one output
parameter. This means that an Objective Type is selected for at least one of the output parameters
and an Objective Value is defined. Multiple output objectives are allowed.

NLPQL, MISQP, and Adaptive Single-Objective require that an objective is defined for one output
parameter. This means that an Objective Type is selected for one of the output parameters and an
Objective Value is defined. Only a single output objective is allowed.

Non-Screening Optimization: When any of the methods other than Screening (i.e., MOGA, NLPQL,
MISQP, Adaptive Single-Objective, or Adaptive Multiple-Objective) are used for Goal Driven
Optimization, the search algorithm uses the Objective Type and target value and/or the Constraint Type
and target range of the output parameters and ignores the similar properties of the input parameters.
However, when the candidate points are reported, the first Pareto fronts (in the case of MOGA and
Adaptive Multiple-Objective) or the best solution set (in the case of NLPQL, MISQP, and Adaptive
Single-Objective) are filtered through a Decision Support Process that applies the parameter Objectives
and Constraints and reports the returned candidate points. The candidate points may not necessarily
correspond to the best input targets, but do provide the best designs with respect to the output properties.
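The Pareto-front step can be illustrated as follows, assuming all objectives are minimized; the point values are hypothetical, and the actual Decision Support Process weighting is not reproduced here:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(points):
    """Points not dominated by any other point (all objectives minimized)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(first_pareto_front(points))   # [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```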

Screening Optimization: The Screening option is not strictly an optimization approach; it uses the
properties of the input as well as the output parameters, and uses the Decision Support Process on a
sample set generated by the Shifted Hammersley technique to rank and report the candidate points.
The candidate points correspond to both the input and output objectives specified by the user, although
the candidate points will not correspond to the best optimal solutions.
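For reference, the classic (unshifted) Hammersley set is built from the van der Corput radical inverse; a minimal sketch is shown below. DesignXplorer's Shifted Hammersley variant modifies this construction, and that shift is not reproduced here:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i
    about the radix point to get a value in [0, 1)."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def hammersley(n, primes=(2, 3, 5)):
    """n points of the Hammersley set in len(primes) + 1 dimensions, in [0, 1)."""
    return [[i / n] + [radical_inverse(i, b) for b in primes] for i in range(n)]

print(hammersley(4, primes=(2,)))
# [[0.0, 0.0], [0.25, 0.5], [0.5, 0.25], [0.75, 0.75]]
```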

Working with Candidate Points


DesignXplorer provides you with multiple ways of viewing and working with candidate points, according
to the node selected in the Optimization tab Outline view.

When you select the Optimization node, the Table view displays a summary of candidate data.

When you select the Candidate Points node, the Table view allows you to view existing candidate points
and add new custom candidate points. Results are also displayed graphically in the Chart view; for details,
see Using the Candidate Points Results (p. 173).

Once candidates are created, you can verify them and also have the option of inserting them into the
Response Surface as other types of points.

Related Topics:
Viewing and Editing Candidate Points in the Table View
Retrieving Intermediate Candidate Points
Creating New Design, Response, Refinement, or Verification Points from Candidates
Verify Candidates by Design Point Update


Viewing and Editing Candidate Points in the Table View


When you select the Candidate Points node of the Outline view, the Table view displays candidate
point data on the existing candidate points that were generated by the optimization.

The maximum number of candidate points that can be generated by the optimization is determined
by the value you specify for the Maximum Number of Candidates property. The recommended
maximum number of candidate points varies according to the optimization method being used. For example,
you generally need only one candidate for the gradient-based, single-objective algorithms (NLPQL and
MISQP), but you can request as many candidates as you want for multiple-objective algorithms (because
there are several potential candidates for each Pareto front that is generated).

Note

The number of candidate points does not affect the optimization, so you can experiment
by changing the value of the Maximum Number of Candidates property and then
updating the optimization; so long as this is the only property changed, the update only
performs post-processing operations and the candidates are generated immediately.

Each candidate point is displayed, along with its input and output values. Output parameter values
calculated from simulations (design point updates) are displayed in black text, while output parameter
values calculated from a response surface are displayed in the custom color defined in the Options
dialog (see Response Surface Options (p. 23)). The number of gold stars or red crosses displayed next
to each goal-driven parameter indicates how well the parameter meets the stated goal, from three red
crosses (the worst) to three gold stars (the best).

For each parameter with a goal defined, the table also calculates the percentage of variation for all
parameters with regard to an initial reference point. By default, the initial reference point for an NLPQL
or MISQP optimization is the Starting Point defined in the optimization properties. For a Screening or
MOGA optimization, the initial reference point is the most viable candidate, Candidate 1. You can set
any candidate point as the initial reference point by selecting the radio button in the Reference column.
The Parameter Value column displays the parameter value and stars indicating the quality of the
candidate. In the Variation from Reference column, green text indicates variation in the expected
direction and red text indicates variation that is not in the expected direction. When there is no obvious
direction (as for a constraint), the percentage value is displayed in black text.
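The percentage of variation and the text-color rule can be sketched as follows; the values and goals are hypothetical:

```python
def variation_from_reference(value, reference):
    """Percentage variation of a candidate output relative to the reference point."""
    return 100.0 * (value - reference) / reference

def variation_color(variation, goal):
    """Green when the variation goes in the expected direction, red when it
    does not, black when there is no obvious direction (e.g. a constraint)."""
    if goal == "Minimize":
        return "green" if variation < 0 else "red"
    if goal == "Maximize":
        return "green" if variation > 0 else "red"
    return "black"

v = variation_from_reference(90.0, 100.0)
print(v, variation_color(v, "Minimize"))   # -10.0 green
```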

The Name of each candidate point indicates whether the candidate point corresponds to a design point
in the Parameter Set Table of Design Points (i.e. whether the candidate point and design point have
the same input parameter values). If the design point is deleted from the Parameter Set or the definition
of either point is changed, the link between the two points is broken (without invalidating your model
or results) and the indicator is removed from the candidate point's name.

If parameter relationships are defined and enabled, you can opt to also view parameter relationships
in the Candidate Points Table view. To do so, select the Show Parameter Relationships check box in
the Properties view. The parameter relationship(s) and their expressions will be shown in the Candidate
Points table. For NLPQL and MISQP, the table will also show current values for each expression.


Creating Custom Candidate Points


The Candidate Points Table view allows you to create custom candidate points. You can add candidate
points to represent the existing design of a product, the initial design of the parametric study, or other
points of interest. You can add a custom candidate point by any of the following methods:

Select Candidate Points under the Outline view Results node. In the Table view, you can enter data into
the cells of the bottom row of the table. For a Response Surface Optimization, you can also right-click a
candidate point row in the Table view and select Insert as Custom Candidate Point.

Select the Optimization node of the Outline view. For a Response Surface Optimization, you can also
right-click a candidate point row in the Table view and select Insert as Custom Candidate Point.

Select any chart under the Outline view Results node. Right-click a point in the chart and select Insert
as Custom Candidate Point.

When a custom candidate point is created in a Response Surface Optimization, the outputs of custom
candidates are automatically evaluated from the response surface. When a custom candidate is created
in a Direct Optimization, the outputs of the custom candidates are not brought up to date until the
next real solve.

Once created, the point is automatically plotted in the Candidate Points results that display in the Chart
view and can be treated as any other candidate point. You have the ability to edit the name, edit input
parameter values, and select options from the right-click context menu. In addition to the standard
context menu options, an Update Custom Candidate Point option is available for out-of-date
candidates in a Direct Optimization, and a Delete option allows you to delete a custom candidate
point.

Retrieving Intermediate Candidate Points


DesignXplorer allows you to monitor an optimization while it is still running by watching the progress
of the History chart. If the History chart indicates that candidate points meeting your
criteria have been found midway through the optimization, you can stop the optimization and retrieve
those results without having to run the rest of the optimization. For details, see Using the History
Chart (p. 165).

To stop the optimization, click the Show Progress button and click the Interrupt button to the right
of the progress bar. Select either Interrupt or Abort from the dialog that is displayed (intermediate
results will be available in either case).

When the optimization is stopped, candidate points are generated from the data available at that time,
such as solved samples, results of the current iteration, the current populations, etc.


To see the state of the optimization at the time it was stopped, view the optimization status and counts
in the Optimization node Properties view.

To view the intermediate candidate points, select Candidate Points under the Results node.

Note

DesignXplorer may not be able to return verified candidate points for optimizations that
have been stopped.

When an optimization is stopped midway through, the Optimization component remains in an unsolved
state. If you change any settings before updating the optimization again, the optimization process must
start over. However, if you do not change any settings before the next update, DesignXplorer makes
use of the design point cache to quickly return to the current iteration.

Creating New Design, Response, Refinement, or Verification Points from Candidates

Once the objectives are stated and candidate points are generated, the candidate points are listed in
the Optimization table view. They can be stored as response points, design points, refinement points
or verification points.

You can create new points by right-clicking on a candidate point in the Table view (when either the
Optimization node or the Candidate Points node is selected in the Outline view) or right-clicking a
point in any of the optimization charts and selecting an option from the context menu. The options
available depend on the type of optimization. Possible options are:

Explore Response Surface at Point inserts a new response point into the Table of Response Surface
by copying the input and output parameter values of the selected candidate point(s).

Insert as Design Point creates a new design point at the project level (edit the Parameter Set in the
Schematic view to list the existing design points) by copying the input parameter values of the selected
candidate point(s). The output parameter values are not copied because they are approximated values
provided by the Response Surface.

Insert as Refinement Point inserts a new refinement point into the Table of Response Surface by
copying the input parameter values of the selected candidate point(s).

Insert as Verification Point inserts a new verification point into the Table of Response Surface by
copying the input and output parameter values of the selected candidate point(s).

Insert as Custom Candidate Point creates new custom candidate points in the Table of Candidate
Points by copying the input parameter values of the selected candidate point(s).

For a Response Surface Optimization, the above insertion operations are available for the Raw
Optimization Data table and most optimization charts, depending on the context. For instance, it is possible


to right-click on a point in a Tradeoff chart in order to insert the corresponding sample as a response
point, a refinement point, or a design point. The same operation is available from a Samples chart.

Note

For a Direct Optimization, only the Insert as Design Point and Insert as Custom Candidate
Point context options are available.

Verify Candidates by Design Point Update


For a Response Surface Optimization, candidate points are verified automatically at the end of the
optimization update if you select the Verify Candidate Points check box in the Properties view of the
Optimization tab. Candidate points can also be interactively verified after they are created by right-
clicking one or more candidate points in the Table view (when either the Optimization or the Candidate
Points node is selected in the Outline view) and selecting the Verify by Design Points Update option
from the context menu; this option is available for both optimization-generated and custom candidate
points.

DesignXplorer verifies candidate points by creating and updating design points with a "real solve," using
the input parameter values of the candidate points. The output parameter values for each candidate
point are displayed in a separate row. For a Response Surface Optimization, verified candidates are
placed next to the row containing the output values generated by the response surface; the sequence
varies according to sort order. Output parameter values calculated from simulations (design point updates)
are displayed in black text, while output parameter values calculated from a response surface are
displayed in the custom color defined in the Options dialog (see Response Surface Options (p. 23)).

In a Response Surface Optimization, if there is a large difference between the results of the verified and
unverified rows for a point, it could indicate that the response surface is not accurate enough in that
area and perhaps refinement or other adjustments are necessary. In this case, it is possible to insert the
candidate point as a refinement point. You will then need to recompute the Goal Driven Optimization
so that the refinement point and the new response surface are taken into account.
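A simple way to picture this check is a relative-error test between the two rows; the 5% threshold below is an arbitrary illustration, not a DesignXplorer setting:

```python
def needs_refinement(predicted, verified, rel_tol=0.05):
    """Flag a candidate when the response-surface prediction deviates from
    the verified (design point update) value by more than rel_tol.
    The threshold is an assumed example value."""
    return abs(predicted - verified) > rel_tol * abs(verified)

print(needs_refinement(102.0, 100.0))   # False: 2% deviation, surface acceptable
print(needs_refinement(112.0, 100.0))   # True: 12% deviation, consider a refinement point
```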

Note

This option could be convenient when the update of a design point is fast enough.

Often the candidate points will not have the ideal input parameters, such as a thickness that
is 0.127 instead of the more practical 0.125. Many users prefer to use the right-click option to
turn this candidate into a design point, edit the design point input parameters, and then run
that design point instead of the candidate point.


To solve the verification points, DesignXplorer uses the same mechanism used to solve DOE points. The
verification points are either deleted or persisted after the run as determined by the usual DesignXplorer
Preserve Design Points after a DX Run option. As usual, if the update of the verification point fails,
it is preserved automatically in the project; you can explore it (as a design point) by editing the
Parameter Set bar on the Project Schematic.

Goal Driven Optimization Charts and Results


After solving an Optimization cell, the following results or charts are created by default under the
Results node of the Outline view: the Candidate Points results, the Sensitivities chart, the Tradeoff
chart, and the Samples chart.

After the first solve of the Optimization cell:

The Convergence Criteria chart displays by default in the Chart view.

The History chart is available for the following objects in the Outline view:

Objectives and constraints

Enabled input parameters

Enabled parameter relationships

With the exception of the Candidate Points results, the Convergence Criteria chart, and the History
chart, it is possible to duplicate charts. Select the chart from the Outline view and either select Duplicate
from the context menu or use the drag and drop mechanism. This operation will attempt an update of
the chart so that the duplication of an up-to-date chart results in an up-to-date chart.

Related Topics:
Using the Convergence Criteria Chart
Using the History Chart
Using the Candidate Points Results
Using the Sensitivities Chart (GDO)
Using the Tradeoff Chart
Using the Samples Chart

Using the Convergence Criteria Chart


Once the optimization has been updated for the first time, the Convergence Criteria chart allows you
to view the evolution of the convergence criteria for the selected iterative optimization method.

Note

The Convergence Criteria chart is not available for the Screening optimization method.

When an external optimizer is used, the Convergence Criteria chart will be generated if data
is available.

The Convergence Criteria chart displays by default in the Chart view when the Optimization node or
Convergence Criteria node is selected in the Outline view. The rendering and logic of the chart varies
according to whether you are using a multiple-objective or single-objective optimization method.


The chart is updated after each iteration, so you can use it to monitor the progress of the optimization.
When the convergence criteria have been met, the optimization stops and the chart remains available.

Related Topics:
Using the Convergence Criteria Chart for Multiple-Objective Optimization
Using the Convergence Criteria Chart for Single-Objective Optimization

Using the Convergence Criteria Chart for Multiple-Objective Optimization


When a multiple-objective optimization method such as MOGA or AMO is being used, the Convergence
Criteria chart plots the evolution of the Maximum Allowable Pareto Percentage and the Convergence
Stability Percentage convergence criteria. For detailed information on these criteria, see Convergence
Criteria in MOGA-Based Multi-Objective Optimization (p. 245).

Note

If an external optimizer that supports multiple objectives is being used, the Convergence
Criteria chart will display the data that is available.

Before running your optimization, you can specify values for these convergence criteria. To do so, select
the Optimization node in the Outline view and then edit the values in the Optimization section of the
Properties view.

The Convergence Criteria chart for a multiple-objective optimization is notated as follows:

The number of iterations is displayed along the X-axis.

The convergence criteria percentage is displayed along the Y-axis.

The Maximum Allowable Pareto Percentage is represented by a solid red line.

The evolution of the Pareto Percentage is represented by a dashed red curve.

The Convergence Stability Percentage is represented by a solid green line.

The Stability Percentage is represented by a dashed green curve.

For MOGA, the convergence criteria values are displayed in the legend.

Using Goal Driven Optimization

You can control the display of the chart by enabling or disabling the convergence criteria. To do so,
select the Convergence Criteria node in the Outline view. In the Criteria section of the Properties
view, select or deselect the Enabled check box for each criterion to specify whether it should be displayed
on the chart.

Using the Convergence Criteria Chart for Single-Objective Optimization


When a single-objective optimization method such as NLPQL, MISQP, or ASO is being used, the Convergence Criteria chart plots the evolution of the best candidate point for the optimization by identifying
the best point for each iteration.

Note

The convergence criteria for these methods, the Allowable Convergence Percentage for
NLPQL and MISQP and the Convergence Tolerance for ASO, have variations that are too
small (10⁻⁶) to be easily displayed on the chart. This will be addressed in a future release.

Before running your optimization, you can specify values for the convergence criteria relevant to your
selected optimization method. (Although these criteria will not be explicitly shown on the chart, they
will affect the optimization and the selection of the best candidate.) To do so, select the Optimization
node in the Outline view and then edit the values in the Optimization section of the Properties view.

The Convergence Criteria chart for a single-objective optimization is notated as follows:

The Number of Iterations is displayed along the X-axis.

The Output Value is displayed along the Y-axis.

The evolution of the Best Candidate is represented by a solid red curve.

Best candidates that are feasible points are represented by red points along the curve.


Best candidates that are infeasible points are represented by gray squares along the curve.
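The data behind this chart is essentially a running record of the best feasible objective value found so far, one entry per iteration. A minimal sketch, assuming minimization (the function name is illustrative, not a DesignXplorer API):

```python
def best_candidate_history(iterations):
    """For each iteration, keep the best feasible objective value seen so far.

    `iterations` is a list of (objective_value, is_feasible) pairs, one
    best point per iteration, as a gradient method such as NLPQL produces.
    """
    history = []
    best = None
    for value, feasible in iterations:
        if feasible and (best is None or value < best):  # minimization
            best = value
        history.append(best)  # None marks iterations with no feasible best yet
    return history
```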

Using the History Chart


Once the optimization has been updated for the first time, the History chart allows you to view the
optimization history of an enabled objective/constraint, input parameter, or parameter relationship.

Additionally, it gives you the option of monitoring the progress of the selected object while the optimization is still in progress. If you select an object during an update, the chart refreshes automatically
and shows the evolution of the objective/constraint, input parameter, or parameter relationship
throughout the update. (For the iterative optimization methods, the chart is refreshed after each iteration.
For the Screening method, it is only updated when the optimization update is complete.) You can select
a different object at any time during the update in order to plot and view a different chart.

The History charts remain available when the update is completed. In the Outline view, a sparkline
version of the History chart is displayed for each objective/constraint, input parameter, or parameter
relationship.

If the History chart indicates that the optimization has converged midway through the process, you
can stop the optimization and retrieve the results without having to run the rest of the optimization.
For details, see Retrieving Intermediate Candidate Points (p. 159).

Note

Failed design points are not displayed on the History chart.

You can access the History chart by selecting an objective or a constraint under the Objectives and
Constraints node or an input parameter or parameter relationship under the Domain node of the
Outline view in the Optimization tab.


Related Topics:
Working with the History Chart in the Chart View
Viewing History Chart Sparklines in the Outline View
Using the Objective/Constraint History Chart
Using the Input Parameter History Chart
Using the Parameter Relationships History Chart

Working with the History Chart in the Chart View


The rendering of the History chart varies, not only according to whether an objective/constraint, input
parameter, or parameter relationship is being charted, but also according to the optimization method
being used. The History chart is notated as follows:

Objective values, which fall within the Optimization Domain defined for the associated parameter,
are listed vertically along the left side of the chart.

Each of the points in the sample set (as defined by the Size of Generated Sample Set property) is
listed horizontally along the bottom of the chart.

The evolution of the object is represented by a red curve.

Bounds for constraints are represented by gray dashed lines.

Target values are represented by blue dashed lines.

You can hover your mouse over any data point in the chart to view the X and Y coordinates.

Screening For a Screening optimization, which is non-iterative, the History chart displays all the
points of the sample set. The chart is updated when all of the points have been evaluated. The plot
reflects the non-iterative process, with each point visible on the chart.


MOGA For a MOGA optimization, the History chart displays the evolution of the population of points
throughout the iterations in the optimization. The chart is updated with the most recent population
(as defined by the Number of Samples per Iteration property) at the end of each iteration.

NLPQL and MISQP For an NLPQL or MISQP optimization, the History chart allows you to trace the progress of the optimization from a defined starting point. The chart displays the objective value associated with the point used for each iteration. Note that the points used to evaluate the derivative values
are not displayed. The plot reflects the gradient optimization process, with the point for each iteration
visible on the chart.


ASO For an Adaptive Single-Objective optimization, the History chart allows you to trace the progress
of the optimization through a specified maximum number of evaluations. On the Input Parameter History
chart, the upper and lower bounds of the input parameter are represented by blue curves, allowing
you to see the domain reductions narrowing toward convergence.

The chart displays the objective value corresponding to LHS or verification points. All evaluated points
are displayed on the plot.

AMO For an Adaptive Multiple-Objective optimization, the History chart displays the evolution of
the population of points throughout the iterations in the optimization. Each set of points (the number
of which is defined by the Number of Samples Per Iteration property) displayed on the chart corresponds
to the population used to generate the next population. Points corresponding to real solves are plotted
as black points, and points from the response surface are plotted with a square colored as specified in
the Options dialog (see Response Surface Options (p. 23)). The plot reflects the iterative optimization
process, with each iteration visible on the chart.

All candidate points generated by the optimization are real design points.


Viewing History Chart Sparklines in the Outline View


When a History chart for an objective/constraint, input parameter, or parameter relationship is generated,
the same data is used to generate a sparkline image of the History chart. The sparklines for defined
objects are displayed in the Outline view and are refreshed dynamically during the update, allowing
you to follow the update progress for all of the objects simultaneously.

The History chart sparkline notation is similar to that of the History chart in the Chart view, as follows:

If no constraints are present, sparklines are gray.

If constraints are present:

Sparklines are green when the constraint or parameter relationship is met.

Sparklines are red when the constraint or parameter relationship is violated. Note that when parameter relationships are enabled and taken into account, the optimization should not pick infeasible points.

Targets are represented by blue dashed lines.

Bounds for constraints are represented by gray dashed lines.

In the image below:

In the Outline view, the sparkline for Minimize P9; P9 <= 14000 N is entirely green, indicating that
the constraints are met throughout the optimization history.

In the Outline view, the sparkline for Maximize P7; P7 >= 13000 is both red and green, indicating
that the constraints are violated at some points and met at others.


The History chart for the constraint Maximize P7; P7 >= 13000 is shown in the Charts view. The
points beneath the dotted gray Lower Bound line are infeasible points.
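The sparkline coloring rule described above can be sketched as a per-point classification against the constraint bounds (a hypothetical helper, not a DesignXplorer API):

```python
def sparkline_colors(history, lower=None, upper=None):
    """Color each history point green when it satisfies the constraint
    bounds, red otherwise; gray when no constraint is present at all."""
    colors = []
    for value in history:
        if lower is None and upper is None:
            colors.append("gray")  # no constraint: neutral sparkline
        elif (lower is None or value >= lower) and (upper is None or value <= upper):
            colors.append("green")  # constraint met at this point
        else:
            colors.append("red")  # constraint violated at this point
    return colors
```

For the Maximize P7; P7 >= 13000 example above, any history value below 13000 would be classified red and the rest green.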

Using the Objective/Constraint History Chart


To generate the History chart for an enabled objective or constraint, update the optimization and then
select the objective or constraint under the Objectives and Constraints node in the Optimization tab
Outline view. If you change your selection during the update, the chart refreshes automatically, allowing
you to monitor the update process.

The History chart for an objective or constraint displays a red curve to represent the evolution of the
parameter for which an objective or constraint has been defined. Constraints are represented by gray
dashed lines, and target values by a blue dashed line.

In the example below, the History chart has been plotted for output parameter P9 WB_BUCK in a MOGA optimization, and the parameter is constrained such that it must have a value less than or equal to 1100. The constraint is represented by the dotted gray line.

Given a Constraint Type of Maximize and an Upper Bound of 1100, the area under the dotted gray
line represents the infeasible domain.


Using the Input Parameter History Chart


To generate the History chart for an enabled input parameter, update the optimization and then select
the input parameter under the Domain node in the Optimization tab Outline view. If you change your
selection during the update, the chart refreshes automatically, allowing you to monitor the update
process.

For an input parameter, the History chart displays a red curve to represent the evolution of the parameter for which the objective has been defined. If an objective or constraint is defined for the parameter,
the same chart will display when the objective/constraint or the input parameter is selected.

In the example below, the History chart has been plotted for input parameter P3 WB_L in an NLPQL
optimization. For P3 WB_L, a Starting Value of 100, a Lower Bound of 90, and an Upper Bound of
110 have been defined. The optimization converged upward to the upper bound for the parameter.


Using the Parameter Relationships History Chart


To generate the History chart for an enabled input parameter relationship, update the optimization and
then select the parameter relationship under the Domain node in the Optimization tab Outline view.
If you change your selection during the update, the chart refreshes automatically, allowing you to
monitor the update process.

For a parameter relationship, the History chart shows two curves to represent the evolution of the left expression and right expression of the relationship. The number of points is displayed along the X-axis and the expression values along the Y-axis.

In the example below, the History chart has been plotted for parameter relationship P2 > P1 in a
Screening optimization.


Using the Candidate Points Results


The Candidate Points results, which are displayed in both the Table and the Chart view, allow you to view different kinds of information about candidate points. You can specify one or more parameters for which you want to display candidate data. In the Chart view, the legend's color-coding allows you to view and interpret the samples, candidate points identified by the optimization, candidates inserted manually, and candidates for which output values have been verified by a design point update. You can specify the chart's properties to control the visibility of each axis, feasible samples, candidates you've inserted manually, and candidates with verified output values. For details on the results displayed in the Table view, see Viewing and Editing Candidate Points in the Table View (p. 158).

To generate the Candidate Points results, update the optimization and then select Candidate Points
under the Results node in the Optimization tab Outline view.

Note

If the project was created in ANSYS Workbench version 14.0 or earlier
and you are opening it for the first time, you must migrate the project database manually
before generating the Candidate Points results. To do so, click the migration icon next to
Candidate Points and then update the optimization. The results are generated automatically
and can be viewed by selecting Candidate Points.

Related Topics:
Understanding the Candidate Points Results Display
Candidate Points Results: Properties

Understanding the Candidate Points Results Display


The rendering of the Candidate Points results that are displayed in the Chart view varies according
to your selections in the Properties view. Once the results are generated, you can adjust the properties


to change what is displayed. In the NLPQL optimization example below, with Coloring Method set to by Candidate Type, you can see that both samples and candidate points are displayed as follows:

Feasible samples are represented by pale orange curves.

The starting point is represented by an orange curve.

Each of the three candidate points generated by the optimization is represented by a green curve.

Each of the custom candidate points is represented by a purple curve.

If you set Coloring Method to by Source Type, the samples will be colored according to the source
from which they were calculated, following the color convention used for data in the Table view.
Samples calculated from a simulation are represented by black curves, while ones calculated from a response surface are represented by a curve in a custom color specified in the Options dialog (see Response Surface Options (p. 23)).

When you move your mouse over the results, you can pick out individual objects, which become
highlighted in orange. When you select a point, the parameter values for the point are displayed in the
Value column of the Properties view.

For each parameter listed across the bottom of the results display, there is a vertical line. When you mouse over it, two handles appear at the top and bottom of the line. Drag the handles up or down to narrow the focus down to the parameter ranges that interest you.

When you select a point on the results, the right-click context menu provides options for exporting
data and saving candidate points as design, refinement, verification, or custom candidate points.

Candidate Points Results: Properties


To specify the properties of the Candidate Points results that display in the Table and Chart views,
expand the Results node in the Optimization Outline view. Select Candidate Points and then edit the
properties in the Properties view. Available properties vary according to the type of optimization selected.


Table
Determines the properties of the results displayed in the Table view. For the Show Parameter Rela-
tionships property, select the Value check box to display parameter relationships in the results Table
view.

Chart
Determines the properties of the results displayed in the Chart view. Select the Value check box to
enable the property.

Display Full Parameter Name: Select to display the full parameter name in the results.

Show Candidates: Select to show candidates in the results.

Show Samples: Select to show samples in the results.

Show Starting Point: Select to show the starting point in the results (NLPQL and MISQP only).

Show Verified Candidates: Select to show verified candidates in the results (Response Surface Optimization only; candidate verification is not necessary for Direct Optimization because its points result from a real solve rather than an estimation).

Coloring Method: Select whether the results should be colored by candidate type or source type, as follows:

by Candidate Type: Different colors are used for different types of candidate points. Default value.

by Source Type: Output parameter values calculated from simulations are displayed in black. Output parameter values calculated from a response surface are displayed in a custom color selected in the Options dialog (see Response Surface Options (p. 23)).

Input Parameters
Each of the input parameters is listed in this section. Under Enabled, you can enable or disable each
input parameter by selecting or deselecting the check box. Only enabled input parameters are shown
in the results.

Output Parameters
Each of the output parameters is listed in this section. Under Enabled, you can enable or disable each
output parameter by selecting or deselecting the check box. Only enabled output parameters are shown
in the results.

Generic Chart Properties


You can modify various generic chart properties for the results. See Setting Chart Properties for details.

Using the Sensitivities Chart (GDO)


The Sensitivities chart shows the global sensitivities of the output parameters with respect to the input
parameters. The chart can be displayed as a bar chart or a pie chart by changing the Mode in the chart's
Properties view. You can also select which parameters to display on the chart by selecting or deselecting the check box next to each parameter in the Properties view.


Various generic chart properties can be changed for this chart.

Note

The GDO Sensitivities chart is available only for the MOGA optimization method.

If the p-Value calculated for a particular input parameter is above the Significance Level
specified in the Design Exploration section of the Tools > Options dialog, the bar for that
parameter will be shown as a flat line on the chart. See Determining Significance (p. 55) for
more information.
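The note above describes a significance filter: a sensitivity bar is flattened when its p-value exceeds the configured Significance Level. As an illustration of that idea only (not DesignXplorer's actual statistic, which is described in Determining Significance (p. 55)), the sketch below uses a Pearson correlation as the sensitivity measure and a permutation test for the p-value:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def sensitivity_bar(xs, ys, significance=0.025, n_perm=2000, seed=0):
    """Correlation-based sensitivity; returns 0.0 (a flat bar) when the
    permutation p-value exceeds the significance level."""
    r = pearson(xs, ys)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        perm = ys[:]
        rng.shuffle(perm)
        if abs(pearson(xs, perm)) >= abs(r):
            hits += 1
    p_value = hits / n_perm
    return r if p_value <= significance else 0.0
```

A strongly correlated input keeps its bar, while an input whose apparent correlation could easily arise by chance is flattened to zero, mirroring the flat-line behavior described in the note.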

Using the Tradeoff Chart


When an Optimization component is solved, the Tradeoff chart is created by default under the Results node of the Outline view. The Tradeoff chart is a 2D or 3D scatter chart representing the generated samples. The colors applied to the points represent the Pareto front they belong to, from red (the worst points) to blue (the best points).

The chart is controlled by changing its properties in the Properties view.

The chart can be displayed in 2D or 3D by changing the Mode in the chart's Properties view.

You can also select which parameter to display on each axis of the chart by selecting the parameter from the drop-down menu next to the axis name in the Properties view.

Limit the Pareto fronts shown by moving the slider or entering a value in the field above the slider in the
chart's Properties view.

Various generic chart properties can be changed for this chart.

Using Tradeoff Studies


In the Tradeoff chart, the samples are ranked by non-dominated Pareto fronts, and you can view the
tradeoffs that most interest you. To make sense of the true nature of the tradeoffs, the plots must be
viewed with the output parameters as the axes. This approach shows which goals can be achieved and
whether this entails sacrificing the goal attainment of other outputs. Typically, a Tradeoff chart shows
you a choice of possible, non-dominated solutions from which to choose.

When an optimization is updated, you can view the best candidates (up to the requested number) from
the sample set based on the stated objectives. However, these results are not truly representative of
the solution set, as this approach obtains results by ranking the solution by an aggregated weighted
method. Schematically, this represents only a section of the available Pareto fronts. Changing the weights
(using the Objective Importance or Constraint Importance property in the Properties view) will display
different such sections. This postprocessing step helps in selecting solutions if you are sure of your
preferences for each parameter.

Figure 1: Three-Dimensional Tradeoff Chart Showing First Pareto Front (p. 177) shows the results of the
tradeoff study performed on a MOGA sample set. The first Pareto front (non-dominated solutions) is
shown as blue points on the output-axis plot. The slider in the Properties view can be moved to the
right to add more fronts, effectively adding more points to the Tradeoff chart. The additional points
added this way are inferior to the points in the first Pareto front in terms of the objectives or constraints
you specified, but in some cases, where there are not enough first Pareto front points, these may be


necessary to obtain the final design. You can right-click on individual points and save them as design
points or response points.

Figure 1: Three-Dimensional Tradeoff Chart Showing First Pareto Front

It is also important to discuss the representation of feasible and infeasible points (infeasible points are
available if any of the objectives are defined as constraints). The MOGA algorithm always ensures that
feasible points are shown as being of better quality than the infeasible points, and different markers
are used to indicate them in the chart. In the 2-D and 3-D Tradeoff chart, colored rectangles denote
feasible points and gray circles denote infeasible ones. You can enable or disable the display of infeasible
points in the chart Properties view.

Also, in both types of charts, the best Pareto front is blue and the fronts gradually transition to red
(worst Pareto front). Figure 2: Two-Dimensional Tradeoff Chart Showing Feasible and Infeasible
Points (p. 178) is a typical 2-D Tradeoff chart with feasible and infeasible points.


Figure 2: Two-Dimensional Tradeoff Chart Showing Feasible and Infeasible Points

For more information on Pareto fronts and Pareto-dominant solutions, see Principles (GDO) (p. 225).
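The ranking of samples into successive non-dominated fronts, which drives the chart's red-to-blue coloring, can be sketched with a naive non-dominated sorting pass (minimization assumed; a hypothetical helper, not the MOGA implementation itself):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(points):
    """Rank points into successive non-dominated fronts (front 0 is best)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # The current front: points dominated by no other remaining point.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

Moving the Pareto-front slider to the right corresponds to plotting additional entries from this list, each front inferior to the ones before it.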

Using the Samples Chart


The Samples chart is a postprocessing feature that allows you to explore a sample set given defined
goals. After solving an Optimization cell, a Samples chart will appear in the Outline under Results.

The aim of this chart is to provide a multidimensional graphical representation of the parameter space
you are studying. The chart uses the parallel Y axes to represent all of the inputs and outputs. Each
sample is displayed as a group of line curves where each point is the value of one input or output
parameter. The color of the curve identifies the Pareto front that the sample belongs to, or the chart
can be set so that the curves display the best candidates and all other samples.

Compared to the Tradeoff chart, the Samples chart has the advantage of showing all the parameters
at once, whereas the Tradeoff chart can only show three parameters at a time. The Samples chart is
better for exploring the parameter space.

The Samples chart is a powerful exploration tool because of its interactivity. With the Samples chart axis sliders, you can filter for each parameter very easily, providing an intuitive way to explore the alternative designs. Samples are dynamically hidden if they fall outside of the bounds. Repeating the same operation with each axis allows you to manually explore and find trade-offs.
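The slider behavior amounts to a per-parameter range filter over the sample set; a minimal sketch (names and data shapes are illustrative):

```python
def visible_samples(samples, bounds):
    """Keep only samples whose every parameter value lies within the
    [low, high] bounds set by that parameter's axis slider.

    `samples` maps a sample id to {parameter: value}; `bounds` maps a
    parameter name to (low, high). Parameters without a slider pass."""
    def in_bounds(sample):
        return all(lo <= sample[p] <= hi for p, (lo, hi) in bounds.items())
    return {sid: s for sid, s in samples.items() if in_bounds(s)}
```

Dragging one slider re-runs this filter with the new bounds, which is why samples disappear and reappear dynamically as you explore.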

The Samples chart has two modes: to display the candidates with the full sample set, or to display the
samples by Pareto front (same as the Tradeoff chart). If you select the Pareto Fronts mode, the Coloring
Method property allows you to specify if the samples should be colored according to sample Pareto
Front.

In the image below, the Samples chart is in Pareto Fronts mode and is colored by Pareto Front. The gradient ranges from blue (best) to red (worst).


In the Candidates mode, the Samples chart's Coloring Method is always by Candidates, as shown in
the image below.

Samples Chart: Properties


To specify the properties of a Samples chart, expand the Results node in the Optimization Outline
view. Select Samples and then edit the chart properties in the Properties view. Note that available
properties vary according to the type of optimization selected.

Chart Properties

Determine the properties of the chart. Select the Value check box to enable the property.

Display Full Parameter Name: Select to display the full parameter name on the chart.

Mode: Specify whether the chart will be displayed by candidates or Pareto fronts. If Pareto Fronts is se-
lected, the following option is available:

Coloring Method: Specify whether the chart is colored by Samples or by Pareto Fronts.

Number of Pareto Fronts to Show: Specify the number of Pareto fronts to be displayed on the chart.

Input Parameters


Each of the input parameters is listed in this section. Under Enabled, you can enable or disable each
input parameter by selecting or deselecting the check box. Only enabled input parameters are shown
on the chart.

Output Parameters

Each of the output parameters is listed in this section. Under Enabled, you can enable or disable each
output parameter by selecting or deselecting the check box. Only enabled output parameters are shown
on the chart.

Generic Chart Properties

You can modify various generic chart properties for this chart. See Setting Chart Properties for details.

Using Six Sigma Analysis
This section contains information about running a Six Sigma Analysis. For more information, see Six
Sigma Analysis Component Reference (p. 46) and Understanding Six Sigma Analysis (p. 257).
Performing a Six Sigma Analysis
Using Statistical Postprocessing
Statistical Measures

Performing a Six Sigma Analysis


To perform a Six Sigma analysis, you must first add a Six Sigma Analysis template to your schematic
from the Design Exploration section of the toolbox. You then edit the Design of Experiments (SSA)
cell of the Six Sigma Analysis and specify which input parameters to evaluate, assign distribution
functions to them as described in Setting Up Uncertainty Variables (p. 191), and specify the distribution
attributes (this will generate your samples for each input variable). Once this is completed, the procedure
for a Six Sigma Analysis is:

Solve the Six Sigma Analysis by:

updating each individual cell in the analysis, either from the right-click menu in the Project Schematic or with the Update button in the tab toolbar while editing the cell.

right-clicking on the Six Sigma Analysis cell in the Project Schematic and selecting Update to update
the entire analysis at once.

clicking the Update Project button in the Project Schematic toolbar to update the entire project.

Using Statistical Postprocessing


From the Six Sigma Analysis tab you can see the statistics in the Properties view, the probability tables
in the Table view (Tables (SSA) (p. 181)), and statistics charts in the Chart view (Using Parameter Charts
(SSA) (p. 182)) associated with each parameter by selecting a parameter in the Outline. You can open
the Sensitivities chart by selecting it from the Charts section of the Outline (see Using the Sensitivities
Chart (SSA) (p. 182)).

Related Topics:
Tables (SSA)
Using Parameter Charts (SSA)
Using the Sensitivities Chart (SSA)

Tables (SSA)
On the Six Sigma Analysis tab, you are able to view the probability tables of any input or output variable.
To select which table view is displayed, select the desired value (Quantile-Percentile or Percentile-
Quantile) for the Probability Table field in the parameter's Properties view.


You can modify both types of tables by adding or deleting values. To add a value to the Quantile-Percentile table, type the desired value into the New Parameter Value cell at the end of the table. A row
with the value you entered will be added to the table in the appropriate location. To add a new value
to the Percentile-Quantile table, type the desired value into the appropriate cell (New Probability Value
or New Sigma Level) at the end of the table. To delete a row from either table, right-click on the row
and select Remove Level from the menu. The row will be deleted from the table. You can also overwrite
any value in an editable column and corresponding values will be displayed in the other columns in
that row.
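Both table views are inverse lookups over the same sampled distribution: the Quantile-Percentile view maps a parameter value to an empirical probability, and the Percentile-Quantile view maps a probability back to a value. A rough sketch over a sorted sample list (a simple empirical estimate; DesignXplorer's exact interpolation may differ):

```python
from bisect import bisect_right

def percentile_of(sorted_samples, value):
    """Empirical probability (%) that the parameter falls at or below value."""
    return 100.0 * bisect_right(sorted_samples, value) / len(sorted_samples)

def quantile_of(sorted_samples, probability_pct):
    """Parameter value below which probability_pct % of the samples fall."""
    n = len(sorted_samples)
    k = min(n - 1, max(0, int(round(probability_pct / 100.0 * n)) - 1))
    return sorted_samples[k]
```

Adding a row to either table corresponds to evaluating one of these lookups for the new value and filling in the other columns from the result.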

Using Parameter Charts (SSA)


You can review the statistical results of the analysis by selecting an input or output parameter to view
its Chart. The results of a Six Sigma Analysis are visualized using histogram plots and cumulative distribution function plots.

Various generic chart properties can be changed for this chart.

Using the Sensitivities Chart (SSA)


If you select Sensitivities from the Outline view of the Six Sigma tab, you can review the sensitivities
derived from the samples generated for the Six Sigma Analysis. The Six Sigma Analysis sensitivities are
Global Sensitivities, not Local Sensitivities. In the Properties view for the Sensitivities chart, you can
choose the output parameters for which you want to review sensitivities, and the input parameters that
you would like to evaluate for the output parameters.

Various generic chart properties can be changed for this chart.

Note

If the p-Value calculated for a particular input parameter is above the Significance Level
specified in the Design Exploration section of the Tools > Options dialog, the bar for that
parameter will be shown as a flat line on the chart. See Determining Significance (p. 55) for
more information.
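The flattening behavior described in the note can be sketched as follows; the parameter names, sensitivity values, p-values, and threshold are all hypothetical:

```python
# Hypothetical sensitivity results: (parameter name, sensitivity, p-value)
sensitivities = [
    ("thickness", 0.62, 0.004),
    ("length", 0.31, 0.021),
    ("fillet_radius", 0.05, 0.48),
]

significance_level = 0.025  # illustrative threshold

def bar_height(sensitivity, p_value, threshold):
    """A bar whose p-value exceeds the threshold collapses to a flat line."""
    return sensitivity if p_value <= threshold else 0.0

bars = {name: bar_height(s, p, significance_level) for name, s, p in sensitivities}
print(bars)   # fillet_radius is flattened to 0.0
```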

Statistical Measures
When the Six Sigma Analysis is updated, the following statistical measures are displayed in the Properties
view for each parameter:

Mean

Standard Deviation

Skewness

Kurtosis

Shannon Entropy (Complexity)

Signal-Noise Ratio (Smaller is Better)

Signal-Noise Ratio (Nominal is Best)


Signal-Noise Ratio (Larger is Better)

Sigma Minimum

Sigma Maximum
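For reference, measures of this kind can be estimated from a sample with textbook formulas. The sketch below uses the common population definitions, a simple 10-bin histogram for Shannon entropy, and the standard Taguchi signal-to-noise expressions; these are assumptions and may differ in detail from DesignXplorer's internal implementation.

```python
import math

def statistical_measures(samples):
    """Textbook estimates of the measures listed above (population formulas,
    a 10-bin histogram for entropy, Taguchi S/N ratios). Assumes at least
    two distinct, strictly positive sample values."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in samples) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in samples) / (n * std ** 4)
    lo, hi = min(samples), max(samples)
    bins = [0] * 10
    for x in samples:
        bins[min(int((x - lo) / (hi - lo) * 10), 9)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in bins if c)
    sn_smaller = -10 * math.log10(sum(x ** 2 for x in samples) / n)
    sn_nominal = 10 * math.log10(mean ** 2 / var)
    sn_larger = -10 * math.log10(sum(1 / x ** 2 for x in samples) / n)
    return {"mean": mean, "std": std, "skewness": skew, "kurtosis": kurt,
            "entropy": entropy, "sn_smaller": sn_smaller,
            "sn_nominal": sn_nominal, "sn_larger": sn_larger,
            "sigma_min": (lo - mean) / std, "sigma_max": (hi - mean) / std}

m = statistical_measures([9.5, 9.8, 10.0, 10.1, 10.2, 10.5])
print(round(m["mean"], 3), round(m["std"], 3))
```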

Working with DesignXplorer
This section provides information on the following topics:
Working with Parameters
Working with Design Points
Working with Sensitivities
Working with Tables
Working with Remote Solve Manager and DesignXplorer
Working with Design Exploration Project Reports

Working with Parameters


General Information
Parameters are exposed from the individual analysis applications. Edit the cells (DOE, Response Surface,
Parameters Correlation, etc.) of a design exploration template to see the available input and output
parameters in the Outline view. You can set options for those parameters (such as upper and lower
limits) by clicking on the parameter in the Outline and then setting the options for that parameter in
the Properties view. The Properties view will also give you additional information about the parameters.

Note

If you modify your analysis database after a design exploration solution (causing the parameters
to change), your design exploration studies may no longer reflect your analysis. If you modify
your analysis, the Update Required symbol will appear on the DesignXplorer cells to indicate
that they are out of date.

Parameters as Design Variables for Optimization


When you create a design exploration study, DesignXplorer captures the current value of each input
parameter. The initial value becomes the default value of the parameter in DesignXplorer and is used
to determine the default parameter range (+/- 10% of the parameter's initial value). These default values
may not be valid in certain situations, so be sure to verify that the ranges are valid with respect to the
input geometry and Analysis System scenario.
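The default range derivation amounts to a one-line computation. In this sketch the +/- 10% fraction is the documented default, while the function name is illustrative:

```python
def default_range(initial_value, fraction=0.10):
    """Default DesignXplorer-style bounds: the initial value +/- 10%.
    (Function name is illustrative; the +/- 10% default is documented above.)"""
    delta = abs(initial_value) * fraction
    return initial_value - delta, initial_value + delta

print(default_range(50.0))    # (45.0, 55.0)
print(default_range(-10.0))   # (-11.0, -9.0)
```

Ranges derived this way from an initial value of zero collapse to a point, which is one reason the defaults may not be valid and should be verified.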

Effects of a Parameter Change on Design Exploration Systems


When the parameters or the project are changed, the design exploration systems are set to the
Refresh Required state. On the Refresh operation, the design exploration results can be partially or
completely invalidated, depending on the nature of the change. This can include the design point
results and the cache of design point results, in which case the Update operation will recalculate
all of them.

When a direct input or direct output parameter is added or deleted in the project, or when the unit
of a parameter is changed, all results including design points results and cache of design point results
are invalidated on Refresh. All design points will be recalculated on Update.


When non-parametric changes occur in the model (e.g. change to the direction of a non-parameterized
force, the shape of the geometry, the value of a non-parameterized dimension of the model, etc.),
all results including design points results and cache of design point results are invalidated on Refresh.
All design points will be recalculated on Update.

When a derived output parameter is added or deleted in the project, or when the expression of a
derived output parameter is modified, all results but the cache of design point results are invalidated
on Refresh. So, on Update, all design point results will be retrieved without any new calculation. The
rest of the design exploration results are recalculated.

Because DesignXplorer is primarily concerned with the range of variation in a parameter, changes to
a parameter value in the model or the Parameter Set bar are not propagated to existing design
exploration systems by a Refresh or Update operation. The parameter values that were used to initialize
a new design exploration system remain fixed within DesignXplorer unless you change them manually.

Note

In some situations, non-parametric changes may not be detected and it is necessary to perform
a Clear Generated Data operation followed by an Update operation in order to synchronize
the design exploration systems with the new state of the project. For example, editing the
input file of a Mechanical APDL system outside of Workbench is a non-parametric change
that is not detected. In some rare cases, Inserting, Deleting, Duplicating or Replacing systems
in a project may not be as reliably detected.

You can change the way that units are displayed in your design exploration systems
from the Units menu. Changing the units display in this manner simply causes the existing
data in each system to be displayed in the new units system, and does not require an update
of the design exploration systems.

Related Topics:
Input Parameters
Setting Up Design Variables
Setting Up Uncertainty Variables
Output Parameters

Input Parameters
By defining and adjusting your input parameters, you define the analysis of the model under investigation.
This section discusses how to work with input parameters, and addresses the following topics:

Defining Discrete Input Parameters (p. 187)

Defining Continuous Input Parameters (p. 188)

Defining Manufacturable Values (p. 188)

Changing Input Parameters (p. 190)

Changing Manufacturable Values (p. 191)


Defining Discrete Input Parameters


A discrete parameter physically represents a configuration or state of the model: for example, the
number of holes or the number of weld points in a geometry. When you define a discrete parameter,
you can define two or more Levels that contain the parametric values to be used in the analysis.

To define a parameter as discrete, select the parameter in the Outline view for the DOE. Then, in the
Properties view, select Discrete as the Classification for the parameter. By default, the Number of
Levels cell in the Properties view is set to 2, the minimum allowable level. These levels are displayed
in the Table view.

The Number of Levels cannot be changed in the Properties view. To change the Number of Levels
for the parameter, you must add or delete a Level in the Table view. You can also edit the values for
existing Levels in the Table view.

To add a Level, select the empty cell in the bottom row of the Discrete Value column, type in an integer,
and press Enter. The Number of Levels in the Properties view is updated automatically.

To delete a Level, right-click on any part of the row containing the Level to be removed and select Delete
from the context menu.

To change the value of a Level, select the cell containing the value to be changed, type in an integer, and
press Enter.

Note

The Discrete Value column of the Table view is not sorted as you enter or delete Levels or
edit Level values. To sort it manually, click the down-arrow on the right of the header cell.
Once you sort the column, the values will auto-sort as you add Levels or edit Level values.


Defining Continuous Input Parameters


A continuous parameter is one that physically varies in a continuous manner within a specific range of
analysis. The range is defined by upper and lower bounds, which default to +/- 10% of the initial value
of the parameter. You can edit the default bound values to adjust the range.

To define a parameter as continuous, select the parameter in the Outline view for the DOE. Then, in
the Properties view, select Continuous as the Classification for the parameter. By default, the Value
cell of the Properties table is populated with the parameter value defined in the Parameters table.
This value cannot be edited in the DOE unless the parameter is disabled (deselected in the Outline
view).

The values in the Lower Bound and the Upper Bound cells in the Properties view define the range
of the analysis. To change the range, select the cell containing the value to be changed, type in a
number, and press Enter. You are not limited to entering integers.

When you define the range for a continuous input parameter, keep in mind that the relative variation
must be equal to or greater than 1e-10 in its current unit. If the relative variation is less than 1e-10, you
can either adjust the variation range or disable the parameter.

To adjust the variation range, select the parameter in the Outline view and then edit the values in the
Lower Bound and Upper Bound cells of the Properties view.

To disable the parameter, deselect the Enable check box in the Outline view.
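The relative-variation requirement can be sketched as a simple validity check. Note that the exact formula DesignXplorer uses is not documented here, so the scaling below (range width divided by the largest bound magnitude) is an assumption:

```python
def relative_variation(lower, upper):
    """One plausible reading of 'relative variation': range width divided by
    the largest bound magnitude. The exact DesignXplorer formula is an
    assumption here, not documented behavior."""
    span = upper - lower
    scale = max(abs(lower), abs(upper)) or 1.0
    return span / scale

def parameter_is_usable(lower, upper, tol=1e-10):
    """A parameter whose relative variation falls below tol should have its
    range adjusted or be disabled, as described above."""
    return relative_variation(lower, upper) >= tol

print(parameter_is_usable(9.0, 11.0))          # True
print(parameter_is_usable(1.0, 1.0 + 1e-12))   # False
```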

Defining Manufacturable Values


A Manufacturable Values filter can be applied to continuous parameters to ensure that real-world man-
ufacturing or production constraints are taken into account during postprocessing. When you apply
the Manufacturable Values filter to a parameter, you can define two or more Levels that contain
parametric values that are feasible for manufacturing. Only these specified values will be used for
analysis of the sample set: for example, Optimization will only return designs based on Manufacturable Values.

Note

The Manufacturable Values filter is currently only available for the Screening optimization
method.
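The effect of the filter can be illustrated by restricting a continuous value to the nearest allowed Level. The function and the list of drill diameters below are hypothetical, and DesignXplorer's own selection logic may differ:

```python
def snap_to_manufacturable(value, levels):
    """Return the nearest allowed Manufacturable Value (a simple illustration
    of the filtering idea; DesignXplorer's own logic is not documented here)."""
    return min(levels, key=lambda lv: abs(lv - value))

drill_diameters = [2.5, 3.0, 4.0, 5.0, 6.0]   # hypothetical feasible values
print(snap_to_manufacturable(3.7, drill_diameters))   # 4.0
print(snap_to_manufacturable(2.6, drill_diameters))   # 2.5
```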

To apply Manufacturable Values to a continuous parameter, select the parameter in the Outline view
for the DOE. Then, in the Properties view, select the Use Manufacturable Values check box. By default:


The Number of Levels cell in the Properties view is set to 2, the minimum allowable level. These levels
are displayed in the Table view.

The Value cell of the Properties table is populated with the parameter value defined in the Parameters
table. This value cannot be edited in the DOE.

Defining the Range for Manufacturable Values


The values in the Lower Bound and the Upper Bound cells in the Properties view define the range
of the postprocessing analysis. To change the range, select the cell containing the value to be changed,
type in a number, and press Enter. You are not limited to entering integers.

When you define the range for a continuous input parameter, keep in mind that the relative variation
must be equal to or greater than 1e-10 in its current unit. If the relative variation is less than 1e-10, you
can either adjust the variation range or disable the parameter.

To adjust the variation range, select the parameter in the Outline view and then edit the values in the
Lower Bound and Upper Bound cells of the Properties view.

To disable the parameter, deselect the Enable check box in the Outline view.

Note

When you adjust the range, Manufacturable Values falling outside the new range are removed
automatically and all results are invalidated.


Defining the Number of Levels for Manufacturable Values


The Number of Levels cannot be changed in the Properties view. To change the Number of Levels
for the parameter, you must add or delete a Level in the Table view. You can also edit the values for
existing Levels in the Table view.

To add a Level, select the empty cell in the bottom row of the Manufacturable Values column, type in
a number, and press Enter. The Number of Levels in the Properties view is updated automatically.

If you enter a value outside of the range defined in the Properties view, you can opt to either auto-
matically extend the range to encompass the new value or cancel the commit of the new value. If
you opt to extend the range, all existing DOE results and design points will be deleted.

To delete a Level, right-click on any part of the row containing the Level to be removed and select Delete
from the context menu. Note that when you delete the Level representing either the upper or the lower
bound, the range is not narrowed.

If you enter a value outside of the range defined in the Properties view, you can opt to either auto-
matically extend the range to encompass the new value or cancel the commit of the new value. If
you opt to extend the range, all existing DOE results and design points will be deleted.

To change the value of an existing Level, select the cell containing the value to be changed, type in an
integer, and press Enter.

For information on disabling and re-enabling the Use Manufacturable Values filter, see Changing
Manufacturable Values (p. 191).

Changing Input Parameters


You can make changes to an input parameter in the first cell of the system (DOE, DOE (SSA), or
Parameters Correlation cell). In the Outline view of the tab, select the input parameter that you want
to edit. Most changes are made in the Properties view for that parameter. Changes made to
parameters affect only the parameter information for the current system and any systems that share
a cell (DOE, Response Surface) with that system.

An input parameter may be selected or deselected for use in that system by checking the box to the right
of it in the DOE Outline view.

In the Properties view, specify further attributes for the selected input parameter, such as a range (upper
and lower bounds) of values for a continuous input parameter or values for a discrete input parameter.

When you make any of the following changes to an input parameter in the first cell of the system (DOE
or Parameters Correlation cell):

enable or disable an input parameter

change the Classification of an input parameter from continuous to discrete, or vice versa

add or remove a Level of a discrete parameter

change the range definition of an input parameter

all generated data associated with the system that contains the modified parameter will be cleared.
For instance, for a GDO system, generated data in all cells of the GDO system will be cleared
if you make one of these changes in the DOE cell. A dialog box is displayed to confirm the parameter
change before clearing all generated data. The change is not made if you select No in the dialog box.

Note

With the DOE method, the number of generated design points in the DOE Matrix is directly
related to the number of selected input parameters. Specifying many input parameters therefore
makes heavy demands on computing time and resources, including system analysis, DesignModeler
geometry generation, and/or CAD system generation. Also, large input parameter ranges may lead
to inaccurate results.

Changing Manufacturable Values


For a continuous parameter with Manufacturable Values, you can enable or disable the Manufacturable
Values filter by selecting or deselecting the Use Manufacturable Values check box in the Properties
view of the DOE.

When you deselect the Use Manufacturable Values check box, the Levels will be removed from the
Tables view. The range will still display in the Properties view, but Manufacturable Values will not be
used.

When you reselect the Use Manufacturable Values check box, the Table view will display the previous
list of Levels.

Once results are generated, you can toggle back and forth between enabled and disabled without
invalidating the DOE or Response Surface. You can also add, delete, or edit Manufacturable Values
without needing to regenerate the entire DOE and Response Surface, provided that you don't alter
the range. As long as the range has not been altered, the information from previous updates is reused.

The following types of results, however, are based on the Manufacturable Values you defined and must
be regenerated after any changes to the Manufacturable Values:

Response points and related charts

Min/Max objects

Optimization candidates and charts

Setting Up Design Variables


A range (upper and lower bounds) of values is required for all continuous input parameters. Also, values
are required for discrete input parameters or continuous input parameters with Manufacturable Values.

Setting Up Uncertainty Variables


For uncertainty variables (used in a Six Sigma Analysis), you must specify the type of statistical distribution
function used to describe its randomness as well as the parameters of the distribution function. For the
distribution type, you can select one of the following:

Uniform

Triangular


Normal

Truncated Normal

Lognormal

Exponential

Beta

Weibull

For more information on the distribution types, see Distribution Functions (p. 263).
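For orientation, all of the listed distribution types can be sampled with Python's standard library. The parameter values below are illustrative only, and truncated normal sampling is sketched with simple rejection:

```python
import random

random.seed(0)

def truncated_normal(mu, sigma, lo, hi):
    """Rejection sampling: redraw until the value falls inside [lo, hi]."""
    while True:
        x = random.normalvariate(mu, sigma)
        if lo <= x <= hi:
            return x

# One generator per listed distribution type; parameter values are illustrative
generators = {
    "Uniform": lambda: random.uniform(0.0, 1.0),
    "Triangular": lambda: random.triangular(0.0, 1.0, 0.3),
    "Normal": lambda: random.normalvariate(100.0, 5.0),
    "Truncated Normal": lambda: truncated_normal(100.0, 5.0, 90.0, 110.0),
    "Lognormal": lambda: random.lognormvariate(0.0, 0.25),
    "Exponential": lambda: random.expovariate(0.01),
    "Beta": lambda: random.betavariate(2.0, 5.0),
    "Weibull": lambda: random.weibullvariate(1.0, 1.5),
}

for name, gen in generators.items():
    print(name, [round(gen(), 3) for _ in range(3)])
```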

In the example below of a beam supporting a roof with a snow load, you could measure the snow
height on both ends of the beam 30 different times. Suppose the histograms from these measurements
look like the figures given below.

Figure 3: Histograms for the Snow Height H1 and H2

[Two histogram plots of Relative Frequency versus Snow Height H1 and versus Snow Height H2;
both distributions are right-skewed, consistent with an exponential distribution.]

From these histograms, you can conclude that an exponential distribution is suitable to describe the
scatter of the snow height data for H1 and H2. Suppose from the measured data we can evaluate that
the average snow height of H1 is 100 mm and the average snow height of H2 is 200 mm. The distribution
parameter λ can be derived directly as 1.0 divided by the mean value, which leads to λ1 = 1/100 = 0.01
for H1 and λ2 = 1/200 = 0.005 for H2.
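The derivation above amounts to a one-line computation. The sample values below are hypothetical measurements chosen so that the means match the example:

```python
# Hypothetical measured snow heights (mm), chosen so the means match the example
h1_samples = [80.0, 140.0, 60.0, 120.0, 100.0]    # mean 100 mm
h2_samples = [160.0, 280.0, 120.0, 240.0, 200.0]  # mean 200 mm

def exponential_lambda(samples):
    """For an exponential distribution, the rate parameter is 1 / mean."""
    return 1.0 / (sum(samples) / len(samples))

print(exponential_lambda(h1_samples))   # 0.01
print(exponential_lambda(h2_samples))   # 0.005
```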

Output Parameters
Each output parameter corresponds to a response surface which is expressed as a function of the input
parameters. Some typical output parameters are: Equivalent Stress, Displacement, Maximum Shear
Stress, etc.

When viewing output parameters in the Outline of the various design exploration tabs (DOE, Response
Surface, etc.), you will see maximum and minimum values for each output parameter in the Properties
view when you click on the parameter cell in the Outline view. Please note that the maximum and
minimum values that you see in the Properties view for a parameter depend on the state of the design
exploration system.

The minimum and maximum values displayed as Properties of the outputs are the best min/max
values available in the context of the current cell: the better of what the current cell has produced
and what the parent cell provided. A DOE cell produces design points, and a min/max can be
extracted from those. A Response Surface produces Min-Max Search results if this option is enabled.
If a refinement is run, new points are generated and a better min/max can be extracted from those.
An Optimization produces a sample set, and again a better min/max than what is provided by the
parent Response Surface can be found in these samples. So keep in mind the state of the design
exploration system when viewing the minimum and maximum values in the parameter properties.

Working with Design Points


When a design exploration system is updated, it creates design points in the Parameter Set and requests
Workbench to update them in order to obtain the corresponding output parameter values. These solved
design points are the input data used by ANSYS DesignXplorer to calculate its own parametric results
(e.g. response surface, sensitivities, optimum designs, etc.)


The number and the definition of the design points created depend on the number of input parameters
and the properties of the design exploration system DOE. See Using a Central Composite Design
DOE (p. 71) for more information.

You can preview the generated design points by clicking the corresponding Preview button in the
toolbar before updating a Design of Experiments, a Response Surface (if performing a refinement), or
another design exploration feature that generates design points during its update.

Some design exploration features provide the ability to edit the list of points, and even to edit the
output parameter values of the points. See Working with Tables (p. 205) for more information.

Before updating design points, you can specify the following details about the update process:

The order in which the design points will be updated. See Design Point Update Order (p. 195) for more
information.

The location in which the update operation will be processed. See Design Point Update Location (p. 195)
for more information.

You can update a design exploration system or cell in several ways:

In the corresponding tab, click the Update button in the toolbar.

In the corresponding tab, right-click the component node in the Outline view and select Update from
the context menu.

From the Project Schematic, right-click on the cell and select Update from the context menu.

In the Project Schematic, click the Update Project button in the toolbar to update all of the systems in
the project.

Note

The Update All Design Points button in the Project Schematic toolbar is only intended to
update the design points listed in the Parameter Sets Table of Design Points. This operation
does not update the points listed in the design exploration tables.

During the update operation, the design points are updated simultaneously if the analysis system is
configured to perform simultaneous solutions; otherwise they are updated sequentially.

When the Design of Experiments, Response Surface, or Parameters Correlation component is updated,
the Table of Design Points is updated dynamically: the generated points appear and their results are
displayed as the points are solved.

As each design point is updated, its parameter values are written to a CSV log file that can later be
imported back into the Table of Design Points in order for you to use the data to continue your work
with the Response Surface. For more information, see Design Point Log Files (p. 198).
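A log with this role could be consumed as ordinary CSV. The column layout shown below is hypothetical, since the actual file format is not documented in this section:

```python
import csv
import io

# Hypothetical layout of a design point log: one row per updated design point
log_text = """Name,P1 - thickness,P2 - load,P3 - max_stress
DP 1,2.0,1000,153.2
DP 2,2.2,1000,141.7
DP 3,2.0,1200,183.9
"""

def read_design_point_log(stream):
    """Parse the CSV log into a list of {column: value} dicts (values stay strings)."""
    return list(csv.DictReader(stream))

points = read_design_point_log(io.StringIO(log_text))
print(len(points), points[0]["Name"])   # 3 DP 1
```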

Related Topics:
Design Point Update Order
Design Point Update Location
Preserving Generated Design Points to the Parameter Set
Exporting Generated Design Points to Separate Projects
Inserting Design Points


Cache of Design Point Results


Raw Optimization Data
Design Point Log Files
Failed Design Points

Design Point Update Order


You can specify the default order in which design points will be updated, either starting from DP0 (the
default) or from the previous design point. To set the default, right-click on the Parameter Set bar,
select Properties, and in the Properties view select a value for the Design Point Update Order setting.

You can also specify in which order the design points are submitted for update to Workbench so that
the efficiency of the operation is improved. By default, DesignXplorer is already optimizing the order
for DOEs that it creates, but you can modify it in different ways via the Table of Design Points:

By editing the values in the Update Order column. If this column is not visible, right-mouse click in the
table and select Show Update Order to make it visible.

By sorting the table by one or several columns and then right-mouse clicking in the table and selecting
Set Update Order by Row. This option will regenerate the Update Order for each design point to match
the sorting of the table. Note that sorting by column is not available when the table view contains several
sheets of data.

Automatically, by right-mouse clicking in the table and selecting Optimize Update Order. This option
will analyze the parameter dependencies in the project and scan the parameter values across all design
points in order to determine an optimal order of update. This operation modifies the Update Order value
for each design point and refreshes the table.

Before each design point update is performed, an alert will be displayed if the design point update order
has been changed from the recommended setting. If you do not want to be shown this message before
each design point update for which the recommended design point update order has been modified,
you can disable it in the Design Exploration Messages section of the Workbench Options dialog (accessed
via Tools > Options > Design Exploration).

See Design Point Update Order in the Workbench User's Guide for more information.

Note

The primary goal of Optimize Update Order is to reduce the number of geometry and mesh
system updates. In general, the optimized order is driven by the order of the update tasks,
which means that the components will be updated depending on their order in the system,
their data dependencies, their parameter dependencies and the state of the parameters. This
works best in a horizontal schematic with geometry as the first separate system. In a single
vertically integrated system project, the parameters defined in the upper components will
be considered the most important. In some cases, because of the order of parameter creation
or the presence of engineering data parameters, you may find that the first columns are not
geometry parameters. In this case, we recommend that you sort by the geometry parameters
instead of using the Optimize Update Order option.
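The grouping idea behind Optimize Update Order can be sketched as a simple sort on the geometry parameters, so that points sharing a shape are updated consecutively. This is a simplified illustration with hypothetical parameter names, not DesignXplorer's actual algorithm:

```python
# Hypothetical design points: a geometry parameter and a load parameter
design_points = [
    {"geom_radius": 5.0, "load": 100},
    {"geom_radius": 6.0, "load": 100},
    {"geom_radius": 5.0, "load": 200},
    {"geom_radius": 6.0, "load": 200},
]

def optimize_update_order(points, geometry_keys):
    """Group points that share geometry values so the geometry and mesh are
    rebuilt once per distinct shape instead of once per point (simplified heuristic)."""
    return sorted(points, key=lambda p: tuple(p[k] for k in geometry_keys))

ordered = optimize_update_order(design_points, ["geom_radius"])
print([p["geom_radius"] for p in ordered])   # [5.0, 5.0, 6.0, 6.0]
```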

Design Point Update Location


In a new project, by default the Design Point Update Process settings are populated with the global
Workbench solution preferences defined in the Tools>Options>Solution Process dialog. You can
specify where a design point update will be processed, indicating the location in which the update
operation will be run; this location can differ from the update location of solution cells.

To specify the update location for design points:

1. Right-click the Parameter Set bar.

2. Select Properties.

3. Under Design Point Update Process in the Properties view, select a value from the Update Option
drop-down. Available values are:

Run in Foreground: The update is run in the current ANSYS Workbench session.

Submit to Remote Solve Manager: The update is submitted to Remote Solve Manager (RSM), a job-
queuing system that distributes tasks to available computing resources. The cell being updated enters
a Pending state. For more information, see Working with Remote Solve Manager and
DesignXplorer (p. 208).

For more detailed information about available Design Point Update Option values, see Solution Process
in the Workbench User's Guide.

By following these steps, you can change the location for design point updates at any time. Because
you can easily edit this setting at the project-level, you can submit successive design point updates to
different computing resources as needed.

Preserving Generated Design Points to the Parameter Set


During the update of a design exploration system, the generated design points are temporarily created
in the project, and listed in the Table of Design Points of the Parameter Set bar. Once the operation
is completed, each generated design point is removed from the project unless it has failed to update.
In this case it is preserved to facilitate further investigation.

DesignXplorer allows you to preserve generated design points so they will be automatically saved to
the Parameter Set for later exploration or reuse. The preservation of design points must be enabled
first at the project level, and then can be configured for individual components.

To enable this functionality at the project level, go to Tools > Options and select Design Exploration.
Under Design Points, select the Preserve Design Points After DX Run check box, indicating that
design points should be saved. When you opt to preserve the design points, the design points will
be available in the Parameter Set.

To enable this functionality at the component level, right-click on the component and select Edit. In
the Properties view under Design Points, select the Preserve Design Points After DX Run check
box.

When design points have been preserved, they are included in the Parameter Set Table of Design
Points. Any DesignXplorer points that display in any of the component Table views (DOE points,
refinement points, direct correlation points, candidate points, etc.) indicate whether they correspond
to a design point in the Parameter Set Table of Design Points (i.e., whether they share the same input
parameter values). When a correspondence exists, the point's Name specifies the design point to which
it is related. If the source design point is deleted from the Parameter Set or the definition of either
design point is changed, the link between the two points is broken (without invalidating your model
or results) and the indicator is removed from the point's name.


Exporting Generated Design Points to Separate Projects


In addition to preserving the design points in the project, you can also export the preserved design
points from the project. If you check the Export Project for Each Preserved Design Point option in
the Properties view under the Preserve Design Points After DX Run property, in addition to saving
the design points to the project Table of Design Points, each of the design exploration design points
is exported as a separate project in the same directory as the original project.

Note

If you run a design exploration analysis and export design points, and then you uncheck the
option to export them, any design points that were marked for export and are reused in
ensuing analyses will still be exported, although newly created design points will not be
exported.

Inserting Design Points


Design points can be added to the project using the Insert as Design Point(s) option, available from
the right-click menu on the rows of the Table view or from various charts. Charts and tables that allow
this operation include the Optimization candidate table, Response Surface Min-Max Search table,
Samples chart, Tradeoff chart, Response chart, and Correlation Scatter chart.

Right-click on one or more table rows or a chart point and select Insert as Design Point(s) from the
context menu. Insert as Design Point(s) creates new design points at the project level by copying the
input parameter values of the selected Candidate Design(s). The output parameter values are not copied
because they are approximated values provided by the Response Surface. Edit the Parameter Set in
the Project Schematic view to view the existing design points.

The above insertion operations are available from Optimization charts, depending on the context. For
instance, it is possible to right-click on a point in a Tradeoff chart in order to insert the corresponding
sample as a design point via the context menu. The same operation is available from a Samples chart,
unless it is being displayed as a Spider chart.

Note

If your cell is out of date, the charts and tables you see are also out of date, but you can still
manipulate them with the old data (for example, changing the navigator to explore different response
points) and insert design points.

Cache of Design Point Results


To reduce the total time required to perform parametric analyses in a project, ANSYS DesignXplorer
stores the design points that it successfully updated in its design points cache.

As a consequence, if the same design points are reused when previewing or updating a design exploration
system, they immediately show up as up-to-date and the cached output parameter values are displayed.
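
Conceptually, the cache behaves like a lookup table keyed on input parameter values. The sketch below illustrates the idea in Python; the function names and the dictionary-based cache are illustrative only and are not part of any DesignXplorer API:

```python
# Minimal sketch of a design point cache keyed on input parameter values.
# All names are illustrative; DesignXplorer's internal cache is not exposed.

def make_cache():
    return {}

def update_design_point(cache, inputs, solver):
    """Return cached outputs when the same inputs were already solved,
    otherwise run the (expensive) solver and cache the result."""
    key = tuple(sorted(inputs.items()))  # inputs: {"P1": 1.0, "P2": 2.0}
    if key in cache:
        return cache[key], True          # cache hit: point shows as up-to-date
    outputs = solver(inputs)             # real solve
    cache[key] = outputs
    return outputs, False
```

Because the key is built from the input values themselves, a point with the same inputs is recognized as already solved regardless of where it appears in the project.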

The cached data is invalidated automatically when relevant data changes occur in the project. However,
you can also force the cache to be cleared by right-clicking on the white space of the Project Schematic
and selecting Clear Design Points Cache for All Design Exploration systems from the context menu.


Raw Optimization Data


DesignXplorer allows you to monitor design point data for Direct Optimizations, both while the
optimization is in progress and after it has completed.

While a Direct Optimization system is updating, the design point data calculated by DesignXplorer is
displayed in the Table view. Design points pulled from the cache are also displayed. In most cases, the
Table view is refreshed dynamically as design points are submitted for update and as they are updated.
If the design points are updated via Remote Solve Manager, however, the results are available and the
Table view is refreshed only when the RSM job is completed.

Once the Direct Optimization is completed, the raw design point data is saved. To display this data in
the Table view, select the Raw Optimization Data node in the Outline view. The data is not editable,
but you can export it to a CSV file by right-clicking inside the table and selecting the Export Data
context option (for details, see Extended CSV File Format (p. 281)). Once the table data is exported, you
can then import it as a custom DOE.

You can also right-click on one or more cells within the table data and select one of the following options:

Insert as Design Point: Creates a new design point at the project level by copying the input parameter
values of the selected point(s). The output parameter values are not copied because they are
approximated values provided by the Response Surface.

Insert as Custom Candidate Point: Creates new custom candidate points in the Table of Candidate
Points by copying the input parameter values of the selected point(s).

Note

The design point data is in raw format, which means it is displayed without analysis or optimization
results and so does not show feasibility, ratings, Pareto fronts, etc.

Design Point Log Files


DesignXplorer saves design point update data. As each design point is solved, DesignXplorer immediately
writes its full definition (i.e., both input and output parameter values) to a design point log file. This
occurs when you do any of the following things:

Update the Current design point by right-clicking it and selecting Update Selected Design Point from
the context menu.

Note

The Update Project operation is processed differently than a design point update, so
when you update the whole project, design point data is not logged. You must use the
right-click context menu to log data on the Current design point.

Update either selected or all design points via the Update Selected Design Point option of the right-
click context menu.

Update a design exploration system such as Design of Experiments.


Open an existing project. Data for all of the up-to-date design points in the project is immediately logged
to the file.

Formatting
The generated log file is in the Extended CSV File Format used by ANSYS DesignXplorer to export table
and chart data and to import data from external CSV files to create new design, refinement, and
verification points. It is primarily formatted according to the Comma-Separated Values standard (file
extension CSV), but also supports several additional non-standard formatting conventions. For details,
see Extended CSV File Format (p. 281).
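
As an illustration, a log file in this format can be read with a few lines of Python. This sketch assumes only the standard CSV layout with a header row of parameter IDs plus comment lines beginning with '#' (one of the extended conventions); it is not a complete parser for every extension of the format:

```python
import csv

def read_design_point_log(path):
    """Read a DesignXplorer-style CSV log into a list of dicts.
    Assumes '#' marks comment lines and the first non-comment
    line is the header row of parameter IDs (P1, P2, ...)."""
    rows = []
    with open(path, newline="") as f:
        # csv.reader accepts any iterable of strings, so comment
        # lines can be filtered out with a generator expression.
        reader = csv.reader(line for line in f if not line.startswith("#"))
        header = next(reader)
        for values in reader:
            rows.append(dict(zip(header, values)))
    return rows
```

Values are returned as strings here; a real consumer would convert the parameter columns to floats as needed.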

File Location
The log file is named DesignPointLog.csv and is written to the user_files directory of the
Workbench project. You can locate the file in the Files view of Workbench via the View > Files menu
option.

Importing the File


Since the design point log file is in the Extended CSV File Format used elsewhere by DesignXplorer, you
can import the design point data back into the Table of Design Points in the Design of Experiments
component of any design exploration system.

Note

In order to import data from the design point log file:

Set the DOE type to Custom.

The list of parameters in the file must exactly match the order and parameter names (e.g.,
P1, P7, P3) in DesignXplorer. To import the log file, you may need to manually extract
a portion of it, using the process described below.

Manually Extracting Part of the Log File


1. Identify the column order and parameter names used by DesignXplorer. You can do this by either of
the following methods:

Review the column order and parameter names in the header row of the Table view.

Export the first row of your custom DOE to create a file with the correct order and parameter names
for the header row.

2. Find DesignPointLog.csv in the user_files directory of the Project Directory and compare it
to your exported DOE file. Verify that the column order and parameter names exactly match those in
DesignXplorer.

3. If necessary, update the column order and parameter names in the design point log file.


4. If parameters were added to or removed from the project, the file will contain several blocks of data,
distinguished by header lines, reflecting these changes. Manually remove the unnecessary sections from
the file, keeping only the block of data that is consistent with your current parameters.

Note

The header line is produced when the log file is initially created and is reproduced
whenever a parameter is added to or removed from the project. If parameters have been
added or removed, you will need to reverify the match between DesignXplorer and the
log file header row.
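
The extraction step above — keeping only the block of data consistent with the current parameters — can be sketched in Python. The header test used here (a line beginning with "Name") is an assumption for illustration; adapt it to whatever your actual header row looks like:

```python
def last_block(lines, is_header=lambda line: line.startswith("Name")):
    """Split a design point log into blocks delimited by header lines
    and return the last block (header included). Since a new header
    is written each time a parameter is added or removed, the last
    block is the one consistent with the current parameters."""
    starts = [i for i, line in enumerate(lines) if is_header(line)]
    if not starts:
        return lines            # no header found: keep the file as-is
    return lines[starts[-1]:]   # header of the most recent block onward
```

Running this over the lines of DesignPointLog.csv and writing the result back out leaves a file with a single, consistent header block ready for import.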

Importing the Log File into the Table of Design Points


1. In the Design of Experiments tab, verify that the Design of Experiments Type property is set to Custom.

2. Right-click on any cell in the Table of Design Points and select Import Design Points from the context
menu.

3. Browse to the user_files directory of the Project Directory and select the DesignPointLog.csv
file.

The design point data will be loaded into the Table of Design Points.

4. Update the DOE.

Failed Design Points


This section deals with failed design points. It includes recommendations on how to prevent design
point failures from occurring, instructions on preserving design point data for future troubleshooting,
and various methods that you can use to deal with design points once they have failed.

The following topics are discussed in this section:


Preventing Design Point Update Failures
Preserving Design Points and Files
Handling Failed Design Points

Preventing Design Point Update Failures


The most effective way to deal with design points that fail to update is to take a proactive approach,
preventing the failures from occurring in the first place. Common causes of failed design points include
licensing issues, IT issues (such as networking or hardware issues), a lack of robustness in the
geometry/mesh and solver, and parametric problems or conflicts. This section offers recommendations to
help you increase the probability of a successful design point update.

Minimize Potential Licensing and IT Issues


Before doing a design point update, you can minimize design point failures by addressing issues that
are not directly related to your project or its contents, but that could cause problems during the update.

Check your network connections.


Check your network connections to verify that everything is in order.


Set Computer Power options to always on.


In your Computer Power options, configure your computer to remain turned on; set options for the
time after which to turn off the hard disk, sleep, and hibernate to Never. (This can avoid update issues
that may be caused by the computer timing out or shutting down during an update.)

Verify that licenses are available.


If you're using CAD, in the Workbench interface go to Tools > Options, select Geometry Import, and
set CAD Licensing to Hold (the default setting is Release). This places a hold on the license currently
being used, so that it will be available for the update.

Leave the modeling application open.


Don't close your modeling application while you are running DesignXplorer. For example, using the
Reader mode in CAD can create update issues because the editors are closed by default.

Restart ANSYS Mechanical or ANSYS Meshing.


By default, the ANSYS Mechanical and ANSYS Meshing applications will restart after each design
point. This default lengthens the overall processing time, but will improve overall system performance
(memory and CPU) when the generation steps of each design point (geometry, mesh, solve,
postprocessing) are lengthy.

If overall system performance is not a concern, you can reduce the overall processing time by
directing the applications to restart less frequently or not at all. In the Workbench interface, go to
Tools > Options and select Mechanical or Meshing. Under Design Points:

To restart less frequently, set the number of design points to update before restarting the application
to a higher value, such as 10.

To prevent restarts completely, deselect the check box for During a design point update, periodically
restart the (Mechanical or Meshing) application.

Create a Robust Model


To create a robust model, review both the parameterization of your model (for instance, CAD or ANSYS
Design Modeler) and your DesignXplorer settings.

Keep only what is being used.


For example, expose only the parameters that you are actively optimizing. Turn off surfaces that aren't
being used, coordinate system imports, etc.

Identify and correct potential parametric conflicts.


While design exploration provides some safeguards against parametric conflicts and poorly defined
problems, it cannot identify and eliminate all potential issues. Give careful consideration to your
parametric setup, including factors such as ranges of variation, to make sure that the setup is
reasonable and well-defined.

Check your mesh.


If you have geometry parameters, double-check your mesh so as to avoid any simulation errors that
could be caused by meshing. Pay particular attention to local refinement and global quality. If you are
using ANSYS Mechanical or Fluent, use the Morphing option when possible.

Use the Box-Behnken DOE type.


Consider using the Box-Behnken Design of Experiments type if your project has parametric extremes
(for example, has extreme parameter values in corners that are difficult to build) and has 12 or fewer


continuous input parameters. Since the Box-Behnken DOE doesn't have corners and does not combine
parametric extremes, it can reduce the risk of update failures.

Test the Robustness of the Model


Before you submit multiple (or all) design points for update, test the model and verify the robustness
of your design.

Verify model parameters by running a test project.


Design points often fail because the model parameters cannot be regenerated. To test the model
parameters, try creating a project with a test load and coarse mesh that will run quickly. Solve the
test project to verify the validity of the parameters.

Verify model parameters by submitting a Geometry-Only update to Remote Solve Manager.


Starting in ANSYS version 14.0, you have the ability to submit a Geometry-Only update for all design
points in DesignXplorer. In the Parameter Set Properties view, set Update Option to Submit to Remote
Solve Manager and set Pre-RSM Foreground Update to Geometry. When you submit the next update, the
geometry is updated first, so any geometry failures will be found sooner.

If necessary, reparameterize the model.


Occasionally, all of your parameters appear to be feasible, but the model still fails to update due to
issues in the history-based CAD tool. In this case, you can reparameterize the model in ANSYS SpaceClaim
or a similar direct modeling application.

Verify that the Current design point updates successfully.


Try to update the Current design point before submitting a full design point update. If the update fails,
you can open the project in an ANSYS application to further investigate that design point. See Handling
Failed Design Points (p. 203) for more information.

Attempt to update a few extreme design points.


Select a few design points with extreme values (for instance, the smallest gap with the largest radius)
and try to update them. In this way, you can assess the robustness of your design before committing
to a full correlation.

Preserving Design Points and Files


By default, when a design point other than the Current design point fails, DesignXplorer saves the
design point but does not retain any of its files; calculated data is saved on disk only for the Current
design point. Before performing a full update you may want to configure DesignXplorer to save design
points and related files. This ensures that after the update, you'll have materials to further investigate
any update problems that might occur.

The preservation of design points and related files must be enabled first at the project level, and then
can be configured for individual components.

To enable this functionality at the project level, go to Tools > Options and select Design Exploration.
Under Design Points, select both the Preserve Design Points After DX Run and the Export Project for
Each Preserved Design Point check boxes, indicating that both design points and their files should be
saved. When you opt to preserve the design points, the design points will be available in the Parameter
Set. When you also opt to retain the files, each of the design points will be saved as a separate project
in the same directory as the original project. You can later open these projects and use them to investigate
any design points that failed to update.


To configure this functionality at the component level, right-click on the component and select Edit. In
the Properties view under Design Points, select both the Preserve Design Points After DX Run and
Export Project for Each Preserved Design Point check boxes. (This is the same as selecting the Export
option for a parameter in the Parameter Set.)

Note

Preserved design points are still part of your original project, so the results of your investigations
on these design points in the separate project will also be updated to the project.

Since every design point is saved as a project, this approach can impact performance and disk
resources when used on projects with larger numbers (> 100) of design points.

Handling Failed Design Points


This section contains information and suggestions on investigating and handling design points once
they've failed. The strategy you use depends on how many design points failed and the nature of the
failure.

Perform Preliminary Troubleshooting Steps


Review error messages.
The first step in gathering information about the update issue is to review error messages. In
DesignXplorer, a failed design point is considered to be completely failed. In reality, though, it may
be a partial failure, where the design point failed to update for only a single output. As such, you
need to know which output is related to the failure.

You can find this information by hovering your mouse over a failed design point in the Table of
Design Points; a primary error message will tell you what output parameters have failed for the
design point and specify the name of the first failed component. Additional information may also
be available in the Messages view.

Reattempt to Update All design points.


Sometimes a design point update fails simply because of a short-term issue (such as a license or CPU
resources being temporarily unavailable), and the issue may actually be remedied in the time between
your first update attempt and your second one. Try again to update all design points before proceeding
further.

Before initiating a design point update, you can also set the Retry Failed Design Points option to
automatically retry the update for failed design points. This option is available for all DesignXplorer
components except for Six Sigma Analysis and a Parameters Correlation that is linked to a response
surface. When this option is selected, DesignXplorer will make additional attempts to solve all of
the design points that failed during the previous run. You can specify both the number of times
the update should be retried and the delay in seconds between each attempt.

To set this option on a global level, select Tools > Options in Workbench to open the Options dialog.
Open the Design Exploration item in the tree and select the Retry Failed Design Points check box.
Specify your preferences in the Number of Retries and Retry Delay fields that are displayed.

To set this option at the project level, open the tab for the component. In the Properties view, set
the Number of Retries property to the desired number (a value of 0 disables the option) and enter
a value for the Retry Delay property.
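
The retry behavior described above can be pictured as a simple retry loop. This Python sketch is only an analogy for the Number of Retries and Retry Delay settings — it is not DesignXplorer code:

```python
import time

def update_with_retries(update, number_of_retries=2, retry_delay=5.0):
    """Generic retry loop mirroring the Number of Retries / Retry Delay
    behavior: `update` is any callable that raises on failure; a value
    of 0 for number_of_retries disables retrying entirely."""
    attempts = 1 + number_of_retries          # first try plus the retries
    for attempt in range(attempts):
        try:
            return update()
        except Exception:
            if attempt == attempts - 1:       # out of retries: give up
                raise
            time.sleep(retry_delay)           # wait before trying again
```

Transient failures (a briefly unavailable license, a saturated CPU) often succeed on a later attempt, which is exactly the situation the retry option targets.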


Address licensing and IT issues.


Error messages may indicate that the failure was caused by factors external to DesignXplorer, such as
network, licensing, or hardware issues. For example, the update may have failed because a license was
not available when it was needed, or because there was a problem with your network connectivity. If
the issue isn't remedied by your second attempt to update, address the issue in question and then retry
the update.

Create a Duplicate Test Project


When preparing to investigate one or more failed design points, keep in mind that editing the current
project during your investigation could invalidate other design points and your design exploration
results. This also applies to any separate design point projects created when you preserve design points
and files; these remain part of the original project, and can affect other design points and your design
exploration results.

To avoid potentially invalidating the original project, it's recommended that you create a duplicate
test project via the Save As menu option. You can either duplicate the entire project or, to focus your
examination on a particular failed design point, duplicate one of the individual projects created for
preserved design points.

Investigate the Failed Design Points


Copy the Failed Design Point to Current.
By default, with a design point update, ANSYS Workbench retains calculated data for only the Current
design point. To investigate a failed design point, you can right-click on the point in the Parameter Set
Table of Design Points and select Copy Inputs to Current from the context menu. With the values of
the failed design point set to Current, you can then open the project in an editor to troubleshoot the
update issue. This approach works well when only a single design point has failed.

For more detailed information on copying design points to Current, see Activating and Exporting
Design Points.

Mark Failed Design Points for Export.


When several design points have failed, you can save the calculated data for these points by marking
them for export. In the Parameter Set Table of Design Points, select the Export check box for each of
the failed design points. With the next update, a separate project will be created for each exported
design point. As with a project created by preserving design points and retaining their files, the new
project can be opened and used to further investigate the reasons for the design point's failure to update.

For more detailed information on exporting design points, see Activating and Exporting Design
Points.

Working with Sensitivities


The sensitivity plots available in design exploration allow you to understand how sensitive the output
parameters are to the input parameters. This understanding can help you innovate toward a more reliable,
better quality design, or save money in the manufacturing process while maintaining the reliability
and quality of your product. You can request a sensitivity plot for any output parameter in your
model.

The sensitivities available under the Six Sigma Analysis and the Goal Driven Optimization views are
statistical sensitivities. Statistical sensitivities are global sensitivities, whereas the single parameter
sensitivities available under the Responses view are local sensitivities.


The global, statistical sensitivities are based on a correlation analysis using the generated sample points,
which are located throughout the entire space of input parameters.

The local parameter sensitivities are based on the difference between the minimum and maximum
value obtained by varying one input parameter while holding all other input parameters constant. As
such, the values obtained for local parameter sensitivities depend on the values of the input parameters
that are held constant.

Global, statistical sensitivities do not depend on the values of the input parameters, because all possible
values for the input parameters are already taken into account when determining the sensitivities.
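
The difference between the two definitions can be illustrated with a toy response function. In this Python sketch (illustrative only, not a DesignXplorer calculation), the local sensitivity to P1 changes depending on the value at which P2 is held constant — exactly the dependence described above:

```python
def local_sensitivity(f, bounds, fixed, name, samples=51):
    """Local sensitivity of f to one input: the max-min spread of f
    while sweeping that input over its range with all other inputs
    held at `fixed` (a toy illustration of the definition above)."""
    lo, hi = bounds[name]
    values = []
    for i in range(samples):
        x = dict(fixed)                              # other inputs held constant
        x[name] = lo + (hi - lo) * i / (samples - 1) # sweep this input
        values.append(f(x))
    return max(values) - min(values)
```

For f(P1, P2) = P1 * P2 with P1 in [0, 1], the local sensitivity to P1 is 0.5 when P2 is held at 0.5 but 2.0 when P2 is held at 2.0, whereas a global, statistical sensitivity samples the whole input space and so does not depend on any single held-constant value.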

Global Sensitivity Chart Limitations


Sensitivity charts are not available if ALL input parameters are discrete.

Sensitivities are calculated for continuous parameters only.

Related Topics:

Using Local Sensitivity Charts (p. 112)


Using the Sensitivities Chart (GDO) (p. 175)
Statistical Sensitivities in a Six Sigma Analysis (p. 270)

Working with Tables


The Table view is used to display tabular data in one or several tables. Depending on the context, the
tables are read-only and filled automatically by ANSYS DesignXplorer, or they are partially or completely
editable.

The background color of a cell indicates if it is editable or not:

A gray background indicates a read-only cell

A white background indicates an editable cell

You can add new rows by entering values in the * row of the table. You enter values in the input
parameter columns. Once you have entered a value in one column in the * row, the row is added to
the table and the values for the remaining input parameters will be set to the initial values of the
parameters. You can then edit that row in the table and change any of the other input parameter values
if needed. Output parameter values will be calculated when the component is updated.

Output parameter values calculated from a simulation (a design point update) are displayed in black
text. Output parameter values calculated from a response surface are displayed in the custom color
specified in the Options dialog. For details, see Response Surface Options (p. 23).

Related Topics:
Viewing Design Points in the Table View
Editable Output Parameter Values
Copy/Paste
Exporting Table Data
Importing Data from a CSV File


Viewing Design Points in the Table View


Points shown in the DesignXplorer component Table views (DOE points, refinement points, direct
correlation points, candidate points, etc.) can correspond to design points in the Parameter Set Table
of Design Points (i.e., they share the same input parameter values). When a correspondence exists, the
point's Name specifies the design point to which it is related.

If the source design point is deleted from the Parameter Set or the definition of either design point is
changed, the link between the two points is broken (without invalidating your model or results) and
the indicator is removed from the point's name.

Editable Output Parameter Values


You can change the editing mode of the tables of design points, refinement points, and verification
points (Design of Experiments and Response Surface components) in order to edit the output parameter
values. Although this is normally not necessary, since the output values are provided by a real solve,
it is a useful way to insert existing results from external sources such as experimental data or known
designs.

The table can be in one of two modes: All Outputs Calculated, or All Outputs Editable. The default
mode for the table is All Outputs Calculated. To change table modes, right-click on the table and select
the desired mode. The mode behavior is as follows:

All Outputs Calculated allows you to enter values for any or all of the inputs in a row, and the outputs
for that row will be calculated when the DOE is updated. In this mode, individual rows in the table
can be overridden so that the outputs are editable. To override a row, right-click on it and select Set
Output Values as Editable.

All Outputs Editable allows you to enter values for any or all of the inputs and outputs in a row. In
this mode, individual rows in the table can be overridden so that the outputs are calculated. To
override a row, right-click on it and select Set Output Values as Calculated. If a row is changed from


editable to calculated, it invalidates the DOE, requiring an Update. Rows with editable outputs are
not computed during an Update of the DOE.

If the table is in All Outputs Editable mode, you can enter values in the input or output parameter
columns. If you enter a value in an input column in the * row, the row is added to the table and the
values for the remaining input parameters will be set to the initial values of the parameters. The
output columns will be blank and you must enter values in all columns before updating the DOE. If
you enter a value in an output column in the * row, the row is added to the table and the values for
all input parameters will be set to the initial values of the parameters. The remaining output columns
will be blank and you must enter values in all columns before updating the DOE.

Note

The table can contain derived parameters. Derived parameters are always calculated by the
system, even if the table mode is All Outputs Editable.

Editing output values for a row changes the component's state to Update Required. The
component will need to be updated, even though no calculations are done.

If the points are solved and you change the table mode to All Outputs Editable and
then change it back to All Outputs Calculated without making any changes, the outputs
will be marked Out of Date and you must update the component. The points will be
recalculated.

Copy/Paste
You can copy/paste to and from the table (a spreadsheet, text file, or design point table, for example).
The paste operation adapts to the current state of the table, pasting only inputs, or both inputs and
outputs if outputs are editable.

To manipulate large numbers of rows, it is recommended that you use the export and import capabilities
described in the following sections.

Exporting Table Data


Each table can be exported as a CSV file. Right-click on a cell, select the Export Data menu entry,
and enter a filename. The contents of the complete table are exported to the selected file.

For details, see Extended CSV File Format (p. 281).

Importing Data from a CSV File


Importing data from an external file is, for instance, a way to set up a specific Design of Experiments
scheme from an external tool, or to use existing results as verification points. The imported data can
contain input parameter values and, optionally, output parameter values as well.

The import operation is available in the context menu when you right-click in a table, on an outline
node, or on a Design Exploration component that implements the import feature. For instance, you can
right-click on the Table of Design Points in a Design of Experiments tab and select Import Design
Points, or you can right-click on a Response Surface component in the Project Schematic view and
select Import Verification Points.


As an example, to import design point values from an external CSV file to a Design of Experiments
component, follow these steps:

1. From the Project Schematic, right-click on the Design of Experiments cell into which you want to
import the design point values and select the Import Design Points menu option.

2. Select an existing CSV file and click Open.

3. A question box asks whether you want the input parameter variation ranges to be extended if
required. Answer Yes to change the upper and lower bounds of the parameters as required; otherwise,
values falling outside of the defined ranges will be ignored by the import operation. You may also
cancel the import action.

The CSV file is parsed and validated first. If the format is not valid or the described parameters are not
consistent with the current project, a detailed list of the errors is displayed and the import operation
is aborted. Then, if the file is validated, the data import is performed.

The main rules driving the file import are the following:

The file must conform to the extended CSV file format as defined in Extended CSV File Format. In
particular, a header line where each parameter is identified by its ID (P1, P2, …, Pn) is mandatory to
describe each column.

The order of the parameters in the file may differ from the order of the parameters in the project.

Disabled input parameters must not be included.

Values must be provided in the units As Defined (menu Units > Display Values As Defined).

If output parameter values are provided, they must be provided for all of the output parameters except
the derived output parameters. If values are provided for derived output parameters, they will be
ignored.

Even if the header line states that output parameter values are provided, it is possible to omit them
on a data line.
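The rules above can be illustrated with a short script that writes a design point table in the expected layout: a header line of parameter IDs followed by one data line per design point, with output values optionally omitted. This is only a sketch — the authoritative dialect is defined in Extended CSV File Format (p. 281), and the parameter IDs and values used here are hypothetical.

```python
import csv

# Hypothetical design points: inputs P1, P2 and one output P3.
# The header line identifies each column by its parameter ID, as required.
rows = [
    {"P1": 10.0, "P2": 0.5, "P3": 125.2},
    {"P1": 12.0, "P2": 0.7, "P3": 140.8},
    {"P1": 14.0, "P2": 0.9},  # omitting the output value on a data line is allowed
]

def write_design_points(path, ids, rows):
    """Write design points to a CSV file with a parameter-ID header line."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(ids)
        for row in rows:
            writer.writerow([row.get(pid, "") for pid in ids])

write_design_points("doe_points.csv", ["P1", "P2", "P3"], rows)
```

A file produced this way could then be selected through the Import Design Points menu option, subject to the unit and parameter-consistency rules listed above.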

Working with Remote Solve Manager and DesignXplorer


The Remote Solve Manager (RSM) is a job queuing and resource management system that distributes
tasks to available local and remote computing resources. When you submit a job to RSM, it can be
queued to run either in the background of the local machine or on one or more remote machines. For
more information on RSM, see the Remote Solve Manager User's Guide.

In order to send updates via RSM, you must first install and configure RSM. For more information, see
the installation tutorials posted on the Downloads page of the ANSYS Customer Portal. For further in-
formation about tutorials and documentation on the ANSYS Customer Portal, go to http://support.an-
sys.com/docinfo.

Submitting Design Point Updates to RSM from DesignXplorer


You can configure DesignXplorer to send design point updates to RSM. This configuration can be dif-
ferent than the configuration for Solution cell updates, so it is possible to send design point updates
and Solution cell updates to different locations for processing. For example, design point updates might
be submitted to RSM for remote processing while the Solution cell update is run in the background
of the local machine, or the Solution cell update might be submitted to RSM while the design point
updates are run in the foreground of the local machine.

It is possible to change the location of individual design point updates at any time, so you can send
successive design point updates to different resources within the same session. For example, if one
update is currently running in the foreground of your local machine, you can send a different update
to RSM for remote processing.

For more information on submitting design points for remote update, see Updating Design Points via
Remote Solve Manager (RSM) in the ANSYS Workbench Customization Toolkit.

For more information on configuring the Solution cell update location, see Submitting Solutions for
Local, Background, and Remote Solve Manager Processes in the Workbench User's Guide.

For more information on configuring design point update locations, see Design Point Update Loca-
tion (p. 195) in the Design Exploration User's Guide.

Previewing Updates
When a Design of Experiments, Response Surface, or Parameters Correlation system cell requires
an update, in some cases you can preview the results of the update. A preview prepares the data and
displays it without updating the design points. This allows you to experiment with different settings
and options before actually solving a DOE matrix or generating a Response Surface or list of Parameters
Correlation samples.

On the Project Schematic, you can either right-click the cell to be updated and select Preview from the
context menu, or select the cell and click the enabled Preview button in the toolbar.

In the tab for the cell (accessed by right-clicking the cell and selecting Edit), you can either right-click on
the root node in the Outline view and select Preview or select the node and click the Preview button
in the toolbar.

Pending State
Once a remote update for a Design of Experiments, Response Surface, or Parameters Correlation
cell has begun, the cell enters a Pending state that allows some degree of interaction with the project.
While the cell is in a Pending state, you can:

Access the Tools > Options dialog, access the Parameter Set Properties view, or archive the project.

Follow the overall process of an update in the Progress view by clicking on the Show Progress button
at the bottom right corner of the window.

Interrupt, abort, or cancel the update by clicking the red x icon in the corner of the Progress view and
then clicking the associated button.

Follow the process of individual design point updates in the Table view by right-clicking the cell being
updated and selecting Edit. Each time a design point is updated, results and status are displayed in the
Table view.

Exit the project and either create a new project or open another existing project. If you exit the project
while it is in a Pending state due to a remote design point update, you can later reopen it and click the
Resume button in the toolbar to resume the update.


You can exit a project while a design point update or a Solution cell update via RSM is still in progress.
For information on behavior when exiting during an update, see Updating Design Points via Remote Solve
Manager (RSM) or Exiting a Project during an RSM Solution Cell Update in the Workbench User's
Guide.

Note

For iterative update processes such as Kriging refinement, Sparse Grid Response Surface
updates, and Correlation with Auto-Stop, design point updates are sent to RSM iteratively;
the calculations of each iteration are based on the convergence results of the previous it-
eration. As a result, if you exit Workbench during the Pending state for an iterative process,
when you reopen the project, only the current iteration will have completed.

For more detailed information on the Pending state, see Updating Design Points via RSM or Project
Schematic in the Workbench User's Guide.

Note

The Pending state is not available for updates that contain verification points or candidate
points. You can still submit these updates to RSM, but you will not receive intermediate
results during the update process and you cannot exit the project. Once the update is sub-
mitted via RSM, you must wait for the update to complete and results to be returned from
the remote server.

Working with Design Exploration Project Reports


The DesignXplorer project report enables you to generate a report containing the results of your design
exploration project, providing a visual snapshot of your Response Surface, Goal Driven Optimization,
Parameters Correlation, or Six Sigma Analysis study. The contents and organization of the report reflect
the layout of the Project Schematic and the current state of the project.

To generate a report for a project, select the File > Export Report menu option. Once you have gener-
ated a project report, you can then save it and edit its contents as needed.

The project report for each type of design exploration analysis system contains a summary; separate
sections for global-level, system-level, and component-level project information; and appendices.

Project Summary The project summary includes the project name, date and time created, and product
version.

Global Project Information The section containing global project information includes a graphic of
the Project Schematic and tables corresponding to the Files, Outline of All Parameters, Design
Points, and Properties views.

System Information The Project Report contains a section for each of the analysis systems in the
project. For example, if your project contains both a Goal Driven Optimization system and a Six Sigma
Analysis, the report will include a section for each of them.

Component Information For each analysis system section, the project report contains subsections
that correspond to the cells in that type of system. For example, a Goal Driven Optimization system
includes a Design of Experiments, a Response Surface, and an Optimization component, and so will
have subsections that correspond to each; a Parameters Correlation system includes only a Correlation
component, so the Parameters Correlation section will include only a Correlation subsection. Subsec-
tions contain project data such as parameter or model properties and charts.

Appendices The appendices contain matrices, tables, and other types of information related to the
project.

For more information on working with Project Reports, see Working with Project Reports in the Work-
bench User's Guide.

DesignXplorer Theory
When performing a design exploration, a theoretical understanding of the methods available is beneficial.
The underlying theory of the methods is categorized as follows:

Understanding Response Surfaces (p. 213): Use to build a continuous function of output versus
input parameters.
Understanding Goal Driven Optimization (p. 224): Use to create an optimization based on re-
sponse surface evaluations (Response Surface Optimization) or real solves (Direct Optimization).
Understanding Six Sigma Analysis (p. 257): Use to run a statistical analysis to quantify reliabil-
ity/quality of a design.

Understanding Response Surfaces


In a process of engineering design, it is very important to understand which and how many input
variables are contributing factors to the output variables of interest. Reaching a conclusion about which
input variables influence the output variables, and how, can be a lengthy process. Designed experiments
help transform this lengthy, costly, and time-consuming trial-and-error search into a powerful and
cost-effective (in terms of computational time) statistical method.

A very simple designed experiment is a screening design. In this design, a permutation of the lower and upper
limits (two levels) of each input variable (factor) is considered to study their effect on the output variable
of interest. While this design is simple and popular in industrial experimentation, it only captures a
linear effect, if any, between the input variables and output variables. Furthermore, the effect of the
interaction of any two input variables on the output variables, if any, cannot be characterized.

To compensate for the insufficiency of the screening design, it is enhanced to include the center point of each
input variable in the experimentation. The center point of each input variable allows a quadratic effect (a minimum
or maximum inside the explored space) between input variables and output variables to be identified, if
one exists. The enhancement is commonly known as a response surface design because it provides a quadratic
response model of the responses. The quadratic response model can be calibrated using a full factorial design
(all combinations of each level of each input variable) with three or more levels. However, full factorial
designs generally require more samples than necessary to accurately estimate model parameters. In
light of this deficiency, statistical procedures have been developed to devise much more efficient experiment
designs that use three or five levels of each factor, but not all combinations of levels; these are known as
fractional factorial designs. Among these fractional factorial designs, the two most popular response surface
designs are Central Composite Designs (CCDs) and Box-Behnken Designs (BBDs).

Design of Experiments types are:

Central Composite Design (p. 214)


Box-Behnken Design (p. 215)

Response Surfaces are created using:

Standard Response Surface - Full 2nd-Order Polynomial algorithms (p. 216)


Kriging Algorithms (p. 217)


Non-Parametric Regression Algorithms (p. 218)


Sparse Grid Algorithms (p. 221)

Central Composite Design


Central Composite Designs, also known as Box-Wilson Designs, are five-level fractional factorial designs
suitable for calibrating the quadratic response model. There are three types of CCDs that are
commonly used in experiment designs: circumscribed, inscribed, and face-centered CCDs. The five-level
coded values of each factor are represented by [-α, -1, 0, +1, +α], where [-1, +1] corresponds to the physical
lower and upper limit of the explored factor space. It is obvious that [-α, +α] establishes new extreme
physical lower and upper limits for all factors. The value of α varies depending on the design property and
the number of factors in the study. For the circumscribed CCDs, considered to be the original form of
Central Composite Designs, the value of α is greater than 1. The following is a geometrical representation
of a circumscribed CCD of three factors:

Example 1: Circumscribed

Inscribed CCDs, on the contrary, are designed using [-1, +1] as the true physical lower and upper limits
for experiments. The five-level coded values of inscribed CCDs are evaluated by scaling down the circumscribed
CCDs by the value of α evaluated from the circumscribed CCDs. For the inscribed CCDs, the five-level coded
values are labeled as [-1, -1/α, 0, +1/α, +1]. The following is a geometrical representation of an inscribed
CCD of three factors:

Example 2: Inscribed


Face-centered CCDs are a special case of Central Composite Designs, in which α = 1. As a result, the face-centered
CCD becomes a three-level design whose axial points are located at the center of each face formed by any
two factors. The following is a geometrical representation of a face-centered CCD of three factors:

Example 3: Face-Centered
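The geometry described above can be enumerated directly: a circumscribed CCD consists of the 2^k factorial corners at ±1, 2k axial points at ±α, and a center point. The sketch below uses the rotatable choice α = (2^k)^(1/4), which is one common convention and not necessarily the value DesignXplorer uses internally.

```python
from itertools import product

def circumscribed_ccd(k):
    """Coded sample points of a circumscribed CCD with k factors."""
    alpha = (2 ** k) ** 0.25          # rotatable design: one common choice of alpha
    # 2^k factorial corner points at the [-1, +1] coded levels.
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    # 2k axial (star) points at -alpha and +alpha on each axis.
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    center = [[0.0] * k]              # a single center point for illustration
    return corners + axial + center

points = circumscribed_ccd(3)         # 8 corners + 6 axial points + 1 center = 15
```

Dividing each axial coordinate by α (so corners sit at ±1/α and axial points at ±1) would yield the corresponding inscribed CCD.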

Box-Behnken Design
Unlike the Central Composite Design, the Box-Behnken Design is quadratic and does not contain an embedded
factorial or fractional factorial design. As a result, the Box-Behnken Design has a limited capability
for orthogonal blocking compared to the Central Composite Design. The main difference between the Box-Behnken
Design and the Central Composite Design is that Box-Behnken is a three-level quadratic design, in which
the explored space of factors is represented by [-1, 0, +1], with the true physical lower and upper limits
corresponding to [-1, +1]. In this design, however, the sample combinations are treated such that they
are located at the midpoints of the edges formed by any two factors. The following is a geometrical
representation of a Box-Behnken design of three factors:

Example 4: Box-Behnken Designs of Three Factors
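The edge-midpoint construction described above can be sketched in code: each pair of factors takes the four ±1 combinations while every remaining factor is held at 0, and a center point is added. This is an illustrative enumeration of the coded points, not DesignXplorer's internal routine.

```python
from itertools import combinations, product

def box_behnken(k):
    """Coded sample points of a Box-Behnken design with k factors (k >= 3)."""
    pts = []
    for i, j in combinations(range(k), 2):     # every pair of factors
        for a, b in product((-1.0, 1.0), repeat=2):
            pt = [0.0] * k
            pt[i], pt[j] = a, b                # midpoint of an edge of the (i, j) face
            pts.append(pt)
    pts.append([0.0] * k)                      # center point
    return pts

points = box_behnken(3)    # 3 pairs * 4 combinations + 1 center = 13 points
```

Note that no point has more than two nonzero coordinates, which is exactly the "missing corners" property discussed below.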

The circumscribed CCD generally provides a high quality of response prediction. However, the circumscribed
CCD requires factor level settings outside the physical range. In some industrial studies, due to
physical and/or economic constraints, the use of corner points (where all factors are at an extreme) is
prohibited. In this case, if possible, factor spacing/range can be planned out in advance to ensure that
[-α, +α] for each coded factor falls within feasible/reasonable levels.

Unlike the circumscribed CCD, the inscribed CCD uses only points within factor levels originally specified.
In other words, the inscribed CCD does not have the corner point constraint issue as in the circum-

Release 15.0 - SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates. 215
DesignXplorer Theory

scribed CCD. The improvement, however, compromises the accuracy of response prediction near the
factor limits (i.e., the lower and upper limits) of each factor. As a result, the inscribed CCD does not
provide the same high quality of response prediction compared to the circumscribed CCD. The inscribed
CCD provides a good response prediction over the central subset of factor space.

The face-centered CCD, like the inscribed CCD, does not require points outside the original factor range.
The face-centered CCD, however, provides relatively high quality of response prediction over the entire
explored factor space in comparison to the inscribed CCD. The drawback of the face-centered CCD is
that it gives a poor estimate of the pure quadratic coefficient, i.e., the coefficient of square term of a
factor.

For relatively the same accuracy, the Box-Behnken Design is more efficient than the Central Composite
Design in cases involving three or four factors, as it requires fewer treatments of factor level combinations.
However, like the inscribed CCD, the prediction at the extremes (corner points) is poor. The property
of missing corners may be useful if these should be avoided due to physical and/or economic constraints,
because the potential for data loss in those cases is prevented.

Standard Response Surface - Full 2nd-Order Polynomial algorithms


The meta-modeling algorithms are categorized as:

General Definitions (p. 216)


Linear Regression Analysis (p. 217)

General Definitions
The error sum of squares SSE is:
SS_E = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \{\hat{e}\}^T \{\hat{e}\}   (2)

where:

y_i = value of the output parameter at the ith sampling point

\hat{y}_i = value of the regression model at the ith sampling point

The regression sum of squares SSR is:

SS_R = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2   (3)

where:

\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i

The total sum of squares SST is:

SS_T = \sum_{i=1}^{n} (y_i - \bar{y})^2   (4)

For linear regression analysis, the relationship between these sums of squares is:

SS_T = SS_R + SS_E   (5)
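A small numeric check of Equations 2 through 5: for an ordinary least-squares fit, SST = SSR + SSE holds to rounding error. The data below are made up purely for illustration.

```python
# Made-up sample data for an ordinary least-squares line fit.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# Slope and intercept of the least-squares regression line.
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
yhat = [intercept + slope * x for x in xs]

sse = sum((y - yh) ** 2 for y, yh in zip(ys, yhat))   # Equation 2
ssr = sum((yh - ybar) ** 2 for yh in yhat)            # Equation 3
sst = sum((y - ybar) ** 2 for y in ys)                # Equation 4

assert abs(sst - (ssr + sse)) < 1e-9                  # Equation 5
```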

Linear Regression Analysis


For a linear regression analysis, the regression model at any sampled location {x}i, with i = 1, ... , n in
the m-dimensional space of the input variables can be written as:
y_i = [t]_i^T \{c\} + \epsilon_i   (6)

where:

[t]_i^T = row vector of regression terms of the response surface model at the ith sampled location

\{c\} = [c_1\ c_2\ \ldots\ c_p]^T = vector of the regression parameters of the regression model

p = total number of regression parameters. For linear regression analysis, the number of regression
parameters is identical to the number of regression terms.

The regression coefficients {c} can be calculated from:

\{c\} = ([T]^T [T])^{-1} [T]^T \{y\}   (7)

where [T] is the design matrix whose ith row is [t]_i^T, and \{y\} is the vector of output parameter
values at the sampled locations.
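Equation 7 can be evaluated with a small linear-algebra helper. The sketch below uses the regression terms [1, x, x²] for a single input variable — a hypothetical choice for illustration; a full 2nd-order polynomial model in m inputs would include cross terms as well.

```python
def solve(A, b):
    """Solve A c = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (M[r][n] - sum(M[r][k] * c[k] for k in range(r + 1, n))) / M[r][r]
    return c

# Design matrix [T] with regression terms [1, x, x^2] for one input variable.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]          # these data satisfy y = 1 + x^2 exactly
T = [[1.0, x, x * x] for x in xs]

# Normal equations: {c} = ([T]^T [T])^-1 [T]^T {y}   (Equation 7)
TtT = [[sum(T[r][i] * T[r][j] for r in range(len(T))) for j in range(3)]
       for i in range(3)]
Tty = [sum(T[r][i] * ys[r] for r in range(len(T))) for i in range(3)]
c = solve(TtT, Tty)                  # recovers approximately [1, 0, 1]
```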

Kriging Algorithms
Kriging postulates a combination of a polynomial model plus departures of the form given by:

y(x) = f(x) + Z(x)   (8)

where y(x) is the unknown function of interest, f(x) is a polynomial function of x, and Z(x) is the realization
of a normally distributed Gaussian random process with mean zero, variance \sigma^2, and non-zero covariance.
The f(x) term in Equation 8 is similar to the polynomial model in a response surface and provides a global
model of the design space.

While f(x) globally approximates the design space, Z(x) creates localized deviations so that the Kriging
model interpolates the N sample data points. The covariance matrix of Z(x) is given by:

Cov[Z(x^i), Z(x^j)] = \sigma^2 R([r(x^i, x^j)])   (9)

In Equation 9, R is the correlation matrix, and r(x^i, x^j) is the spatial correlation of the function between
any two of the N sample points x^i and x^j. R is an N*N symmetric, positive definite matrix with ones along
the diagonal. The correlation function r(x^i, x^j) is the Gaussian correlation function:

r(x^i, x^j) = \exp\left( -\sum_{k=1}^{M} \theta_k \left| x_k^i - x_k^j \right|^2 \right)   (10)

The \theta_k in Equation 10 are the unknown parameters used to fit the model, M is the number of design
variables, and x_k^i and x_k^j are the kth components of sample points x^i and x^j. In some cases, using a
single correlation parameter gives sufficiently good results; you can specify the use of a single correlation
parameter, or one correlation parameter for each design variable (Tools > Options > Design Exploration >
Response Surface > Kriging Options > Kernel Variation Type: Variable or Constant).

Z(x) can be written as:

Z(x) = \sum_{i=1}^{N} \lambda_i r(x, x^i)   (11)
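The Gaussian correlation function of Equation 10 is easy to evaluate directly. The sample points and θ values below are hypothetical; in practice DesignXplorer fits the θ_k internally as part of the Kriging construction.

```python
from math import exp

def gauss_corr(xi, xj, theta):
    """Gaussian correlation r(x_i, x_j) from Equation 10."""
    return exp(-sum(t * (a - b) ** 2 for t, a, b in zip(theta, xi, xj)))

def correlation_matrix(samples, theta):
    """N x N symmetric correlation matrix R with ones on the diagonal."""
    n = len(samples)
    return [[gauss_corr(samples[i], samples[j], theta) for j in range(n)]
            for i in range(n)]

samples = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0)]   # hypothetical design points
R = correlation_matrix(samples, theta=(1.0, 1.0))
# R[i][i] == 1.0 for every i, and R is symmetric.
```

Larger θ_k values make the correlation decay faster along direction k, so the model responds more locally to that design variable.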

Non-Parametric Regression Algorithms


Let the input sample (as generated from a DOE method) be X = {x_1, x_2, x_3, …, x_M}, where each x_i is an
N-dimensional vector and represents an input variable. The objective is to determine the equation of
the form:

y = \langle W, X \rangle + b   (12)

Where W is a weighting vector. In the generic non-parametric case, Equation 12 (p. 218) is rewritten
as the following:

y = \sum_{i=1}^{M} (A_i - A_i^*) \langle \Phi(x_i), \Phi(x) \rangle + b   (13)

Where \Phi is the kernel map, and the quantities A_i and A_i^* are Lagrange multipliers whose derivation
will be shown in later sections.

In order to determine the Lagrange multipliers, we start with the assumption that the weight vector W
must be minimized such that all (or most) of the sample points lie within an error zone around the fitted
surface. For a simple demonstration of this concept, see Figure 4: Fitting a regression line for a group of
sample points with a tolerance of ε which is characterized by slack variables ξ* and ξ (p. 219).

Figure 4: Fitting a regression line for a group of sample points with a tolerance of ε which is
characterized by slack variables ξ* and ξ

Thus, we write a primal optimization formulation as the following:

Minimize L = \frac{1}{2}\|W\|^2 + C \sum_{i=1}^{N} (\xi_i + \xi_i^*)

S.T.
y_i - \langle W, x_i \rangle - b \le \varepsilon + \xi_i
\langle W, x_i \rangle + b - y_i \le \varepsilon + \xi_i^*
\xi_i, \xi_i^* \ge 0
(14)

Where C is an arbitrary (> 0) constant. In order to characterize the tolerance properly, we define a loss
function in the interval (-\varepsilon, \varepsilon) which de facto becomes the residual of the solution. In
the present implementation, we use the \varepsilon-insensitive loss function, which is given by the equation:

l_\varepsilon(y) =
\begin{cases}
0 & \text{if } |f(x) - y| < \varepsilon \\
|f(x) - y| - \varepsilon & \text{otherwise}
\end{cases}
(15)

The primal problem in Equation 14 (p. 219) can thus be rewritten using generalized loss functions l(\xi) as:

Minimize L = \frac{1}{2}\|W\|^2 + C \sum_{i=1}^{N} \left( l(\xi_i) + l(\xi_i^*) \right)

S.T.
y_i - \langle W, x_i \rangle - b \le \varepsilon + \xi_i
\langle W, x_i \rangle + b - y_i \le \varepsilon + \xi_i^*
(16)

In order to solve this efficiently, a Lagrangian dual formulation is done on this to yield the following
expression:

L = \frac{1}{2}\|W\|^2 + C \sum_{i=1}^{N} (\xi_i + \xi_i^*)
    - \sum_{i=1}^{N} A_i \left( \varepsilon + \xi_i - y_i + \langle W, x_i \rangle + b \right)
    - \sum_{i=1}^{N} A_i^* \left( \varepsilon + \xi_i^* + y_i - \langle W, x_i \rangle - b \right)
    - \sum_{i=1}^{N} \left( \eta_i \xi_i + \eta_i^* \xi_i^* \right)
(17)

where \eta_i and \eta_i^* are the Lagrange multipliers associated with the constraints \xi_i \ge 0 and
\xi_i^* \ge 0.

After some simplification, the dual Lagrangian can be written as the following Equation 18 (p. 220):

Maximize over A, A^*:

L_D = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} (A_i - A_i^*)(A_j - A_j^*) \langle x_i, x_j \rangle
      + \sum_{i=1}^{N} y_i (A_i - A_i^*)
      + C \sum_{i=1}^{N} \left( T(\xi_i) + T(\xi_i^*) \right)

S.T.
W = \sum_{i=1}^{N} (A_i - A_i^*) x_i
\sum_{i=1}^{N} (A_i - A_i^*) = 0
A_i^{(*)} \le C \, \partial_\xi l(\xi_i^{(*)})
\xi_i^{(*)} = \inf \{ \xi \mid A^{(*)} \le C \, \partial_\xi l(\xi) \}
A_i^{(*)}, \xi_i^{(*)} \ge 0

where T(\xi) = l(\xi) - \xi \, \partial_\xi l(\xi)
(18)

Equation 18 is a quadratic constrained optimization problem, and the design variables are the vector A. Once
this is computed, the constant b in Equation 13 (p. 218) can be obtained by the application of Karush-Kuhn-Tucker
(KKT) conditions. Once the \varepsilon-insensitive loss functions are used instead of the generic loss functions
l(\xi) in Equation 18, it can be rewritten in a much simpler form as the following:

Maximize over A, A^*:

L_D = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} (A_i - A_i^*)(A_j - A_j^*) \langle x_i, x_j \rangle
      - \varepsilon \sum_{i=1}^{N} (A_i + A_i^*)
      + \sum_{i=1}^{N} y_i (A_i - A_i^*)

S.T.
\sum_{i=1}^{N} (A_i - A_i^*) = 0
0 \le A_i, A_i^* \le C
(19)

which is solved by a QP optimizer to yield the Lagrange multipliers A_i and A_i^*; the constant b is
obtained by applying the KKT conditions.
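The ε-insensitive loss of Equation 15 can be written directly: residuals inside the ±ε tube cost nothing, and residuals outside it grow linearly by the amount they exceed ε. The numeric values below are purely illustrative.

```python
def eps_insensitive_loss(f_x, y, eps):
    """Epsilon-insensitive loss from Equation 15.

    f_x is the regression model's prediction and y is the observed value;
    only the part of the residual exceeding eps is penalized.
    """
    r = abs(f_x - y)
    return 0.0 if r < eps else r - eps

# A point inside the tolerance tube contributes no loss:
assert eps_insensitive_loss(1.05, 1.0, eps=0.1) == 0.0
# Outside the tube, the loss is the excess over epsilon:
assert abs(eps_insensitive_loss(1.3, 1.0, eps=0.1) - 0.2) < 1e-12
```

This flat region around zero residual is what allows many of the A_i, A_i^* multipliers in Equation 19 to vanish, leaving a sparse set of support vectors.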


Sparse Grid Algorithms


The Sparse Grid metamodeling is a hierarchical Sparse Grid interpolation algorithm based on piecewise
multilinear basis functions.

The first ingredient of a Sparse Grid method is a one-dimensional multilevel basis.

Piecewise linear hierarchical basis (from level 0 to level 3):

The calculation of the coefficient values associated with a piecewise linear basis is hierarchical: the
coefficients are obtained from the differences between the values of the objective function and the
evaluation of the current Sparse Grid interpolation.

Example 5: Interpolation with the Hierarchical Basis

For a multidimensional problem, the Sparse Grid metamodeling is based on piecewise multilinear basis
functions which are obtained by a sparse tensor product construction of one-dimensional multilevel
basis.

Let l = (l_1, …, l_d) denote a multi-index, where d is the problem dimension (number of input parameters)
and l_i corresponds to the level in the i-th direction.

The generation of the Sparse Grid W_l is obtained by the following tensor product:

W_l = W_{l_1} \otimes W_{l_2} \otimes \cdots \otimes W_{l_d}

Example 6: Tensor Product Approach to Generate the Piecewise Bilinear Basis Functions W2,0

Tensor product of linear basis functions for a two-dimensional problem:

To generate a new Sparse Grid W_l', any Sparse Grid W_l that meets the order relation W_l < W_l' needs to
be generated before it. Here W_l < W_l' means that l_i ≤ l'_i for each direction i, with l ≠ l'.

For example, in a two-dimensional problem, the generation of the grid W2,1 requires several steps:

1. Generation of W0,0

2. Generation of W0,1 and W1,0

3. Generation of W2,0 and W1,1

The calculation of the coefficient values associated with a piecewise multilinear basis is similar to the
calculation of the coefficients of the linear basis: the coefficients are obtained from the differences between
the values of the objective function on the new grid and the evaluation (of the same grid) with the current
Sparse Grid interpolation (based on the old grids).

We can observe for higher-dimensional problems that not all input variables carry equal weight. A regular
Sparse Grid refinement can lead to too many support nodes; that is why the Sparse Grid metamodeling
uses a dimension-adaptive algorithm to automatically detect separability and which dimensions are
more or less important, in order to reduce the computational effort for the objective functions.

The hierarchical structure is used to obtain an estimate of the current approximation error. This current
approximation error is used to choose the relevant direction in which to refine the Sparse Grids. In other
words, if the approximation error has been found with the Sparse Grid W_l, the next iteration consists of
the generation of new Sparse Grids obtained by incrementing each dimension level of W_l (one by one)
as far as possible: the refinement of W2,1 can generate 2 new Sparse Grids, W3,1 and W2,2 (if W3,0
and W1,2 already exist).

The Sparse Grid metamodeling stops automatically when the desired accuracy is reached or when the
maximum depth is met in all directions. The maximum depth corresponds to the maximum number of
hierarchical interpolation levels to compute: if the maximum depth is reached in one direction, that
direction is not refined further.

Each new generation of the Sparse Grid introduces as many linear basis functions as there are new points
of discretization.

All Sparse Grids generated by the tensor product contain only one point, which allows refinement to be
performed more locally.

Sparse Grid metamodeling is more efficient with this more local refinement process, which uses fewer
design points and reaches the requested accuracy faster.
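A minimal one-dimensional illustration of the hierarchical construction: the level-l basis functions are hats of width 2^(1-l) centered at the odd grid points, and each new coefficient (the hierarchical surplus) is the difference between the objective function value and the current interpolant at that node. This is a sketch of the classic piecewise linear hierarchy on [0, 1] with a constant level-0 basis, not DesignXplorer's implementation.

```python
def hat(x, center, level):
    """Piecewise linear hierarchical basis function on [0, 1]."""
    if level == 0:
        return 1.0                        # level 0: single constant basis function
    h = 0.5 ** level                      # mesh width at this level
    return max(0.0, 1.0 - abs(x - center) / h)

def hierarchical_interpolant(f, max_level):
    """Build (node, level, surplus) triples and return them with an evaluator."""
    surpluses = []

    def evaluate(x):
        return sum(s * hat(x, c, l) for c, l, s in surpluses)

    surpluses.append((0.5, 0, f(0.5)))    # level 0: one node at the midpoint
    for level in range(1, max_level + 1):
        h = 0.5 ** level
        for k in range(1, 2 ** level, 2): # odd multiples of h are the new nodes
            node = k * h
            # Hierarchical surplus: objective value minus current interpolant.
            surpluses.append((node, level, f(node) - evaluate(node)))
    return surpluses, evaluate

f = lambda x: x * (1.0 - x)
surpluses, interp = hierarchical_interpolant(f, max_level=4)
# The interpolant reproduces f exactly at every grid node it has visited.
```

A dimension-adaptive scheme would inspect the magnitudes of these surpluses, direction by direction, to decide where further refinement pays off.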


Sparse Grid related topics:

Sparse Grid (p. 84)

Understanding Goal Driven Optimization


Goal Driven Optimization (GDO) is a set of constrained, multi-objective optimization techniques in which
the "best" possible designs are obtained from a sample set given the objectives you set for parameters.
The available optimization methods are:

Screening

MOGA

NLPQL

MISQP

Adaptive Single-Objective

Adaptive Multiple-Objective

The Screening, MISQP, and MOGA optimization methods can be used with discrete parameters. The
Screening, MISQP, MOGA, Adaptive Multiple-Objective, and Adaptive Single-Objective optimization
methods can be used with continuous parameters with Manufacturable Values.

The GDO process allows you to determine the effect on input parameters of certain objectives applied
to the output parameters. For example, in a structural engineering design problem, you may want to
determine which combination of design parameters best satisfies minimum mass, maximum natural frequency,
maximum buckling and shear strengths, and minimum cost, with maximum value constraints on the
von Mises stress and maximum displacement.

This section describes GDO and its use in performing single- and multiple-objective optimization.
Principles (GDO)
Guidelines and Best Practices (GDO)
Goal Driven Optimization Theory


Principles (GDO)
You can apply Goal Driven Optimization to design optimization by using any of the following methods:
Screening, MOGA, NLPQL, MISQP, Adaptive Single-Objective, or Adaptive Multiple-Objective.

The Screening approach is a non-iterative direct sampling method that uses a quasi-random number
generator based on the Hammersley algorithm.

The MOGA approach is an iterative Multi-Objective Genetic Algorithm, which can optimize problems
with continuous input parameters.

The NLPQL approach is a gradient-based, single-objective optimizer which is based on quasi-Newton
methods.

The MISQP approach is a gradient-based, single-objective optimizer that solves mixed-integer non-
linear programming problems by a modified sequential quadratic programming (SQP) method.

The Adaptive Single-Objective approach is a gradient-based, single-objective optimizer that employs
an OSF Design of Experiments, a Kriging response surface, and the MISQP optimization algorithm.

The Adaptive Multiple-Objective approach is an iterative, multi-objective optimizer that employs
a Kriging response surface and the MOGA algorithm. In this method, the use of Kriging allows for a
more rapid optimization process because:

Except when necessary, it does not evaluate all design points.

Part of the population is simulated by evaluations of the Kriging, which is constructed of all the
design points submitted by the MOGA algorithm.
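The Hammersley sampling behind the Screening method can be sketched in a few lines: coordinate j of point i is the radical-inverse of i in the j-th prime base, with the first coordinate taken as i/N. This is the textbook construction; DesignXplorer's exact variant (scrambling, point counts, and so on) may differ.

```python
def radical_inverse(i, base):
    """Reverse the digits of i in the given base and place them after the point."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n_points, dim):
    """n_points quasi-random samples filling the unit hypercube [0, 1)^dim."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    assert dim - 1 <= len(primes), "extend the prime list for higher dimensions"
    return [[i / n_points] + [radical_inverse(i, primes[j]) for j in range(dim - 1)]
            for i in range(n_points)]

samples = hammersley(100, 3)   # 100 low-discrepancy points in 3 dimensions
```

Unlike pseudo-random sampling, these points cover the cube with low discrepancy, which is what makes a single non-iterative pass useful for preliminary screening.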

MOGA is better for calculating the global optima, while NLPQL and MISQP are gradient-based algorithms
ideally suited for local optimization. So you can start with Screening or MOGA to locate the multiple
tentative optima and then refine with NLPQL or MISQP to zoom in on the individual local maximum or
minimum value.

The GDO framework uses a Decision Support Process (DSP) based on satisfying criteria as applied to
the parameter attributes using a weighted aggregate method. In effect, the DSP can be viewed as a
postprocessing action on the Pareto fronts as generated from the results of the various optimization
methods.

Usually the Screening approach is used for preliminary design, which may lead you to apply one of
the other approaches for more refined optimization results. Note that running a new optimization causes
a new sample set to be generated.

In both the Screening and MOGA approaches, the Tradeoff chart, as applied to the resulting sample set, shows the Pareto-dominant
solutions. However, in the MOGA approach, the Pareto fronts are better articulated and most of the
feasible solutions lie on the first front, as opposed to the usual results of the Screening approach where
the solutions are distributed across all the Pareto fronts. This is illustrated in the following two figures.
Figure 5: 6,000 Sample Points Generated by Screening Method (p. 226) shows the sample as generated
by the Screening method, and Figure 6: Final Sample Set After 5,400 Evaluations by MOGA Method (p. 226)
shows a sample set generated by the MOGA method for the same problem.

Release 15.0 - SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates. 225
DesignXplorer Theory

Figure 5: 6,000 Sample Points Generated by Screening Method

Figure 6: Final Sample Set After 5,400 Evaluations by MOGA Method

Figure 7: Pareto Optimal Front Showing Two Non-Dominated Solutions (p. 227) shows the necessity of
generating Pareto fronts. The first Pareto front or Pareto frontier is the list of non-dominated points
for the optimization.

A dominated point is a point that, when considered in regard to another point, is not the best solution
for any of the optimization objectives. For example, if point A and point B are both defined, point B is
a dominated point when point A is the better solution for all objectives.
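The dominance test just described can be sketched in a few lines of Python (a generic illustration with hypothetical sample values, not DesignXplorer code; both objectives are minimized here):

```python
def dominates(a, b):
    """True if point a dominates b: a is no worse in every objective
    (all objectives minimized here) and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def first_pareto_front(points):
    """Return the non-dominated points -- the first Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical two-objective samples (both objectives minimized)
samples = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = first_pareto_front(samples)
```

Here (3.0, 4.0) is dominated by (2.0, 3.0), and (5.0, 5.0) by (1.0, 5.0), so the first front consists of the remaining three points.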


In the example below, the two axes represent two output parameters with conflicting objectives: the
X axis represents Minimize P6 (WB_V) and the Y axis represents Maximize P9 (WB_BUCK). The chart
shows two optimal solutions, point 1 and point 2. These solutions are non-dominated, which means
that both points are equally good in terms of Pareto optimality, but for different objectives. In this ex-
ample, point 1 is the better solution for Minimize P6 (WB_V) and point 2 is the better solution for
Maximize P9 (WB_BUCK). Neither point is strictly dominated by any other point, so both are included
on the first Pareto front.

Figure 7: Pareto Optimal Front Showing Two Non-Dominated Solutions

Guidelines and Best Practices (GDO)


Typically, the use of NLPQL or MISQP is suggested for continuous problems when there is only one
objective function. The problem may or may not be constrained and must be analytic (i.e., it must be
defined only by continuous input parameters, and the objective functions and constraints should not
exhibit sudden 'jumps' in their domain). The main difference from MOGA lies in the fact that MOGA is
designed to work with multiple objectives and does not require full continuity of the output parameters.
However, for continuous single objective problems, the use of NLPQL or MISQP gives greater accuracy
of the solution as gradient information and line search methods are used in the optimization iterations.
MOGA is a global optimizer designed to avoid local optima traps, while NLPQL and MISQP are local
optimizers designed for accuracy.

The default convergence rate for the NLPQL and MISQP algorithms is set to 1.0E-06. This is computed
based on the (normalized) Karush-Kuhn-Tucker (KKT) condition. This implies that the fastest convergence
rate of the derivatives or the functions (objective function and constraints) determines the termination
of the algorithm. The advantage of this approach is that, for large problems, it is possible to get a
near-optimal feasible solution quickly without being trapped in a series of iterations involving small
solution steps near the optima. However, if you prefer a more numerically accurate solution, which may
involve these iterations, the convergence rate can be set as low as 1.0E-12. The range 1.0E-06 to 1.0E-12
covers the vast majority of optimization problems, and the choice of convergence rate within it is a
matter of preference.
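As a rough illustration of this kind of termination test (a simplified sketch, not the actual NLPQL/MISQP implementation; the function and variable names are invented), a normalized KKT residual can be compared against the chosen tolerance:

```python
def kkt_residual(grad_f, grad_h, lam, h_vals):
    """Simplified KKT residual for equality constraints: the stationarity
    measure max|grad f + sum_l lam_l * grad h_l| plus the feasibility
    measure max|h_l|. grad_h is a list of constraint-gradient vectors."""
    n = len(grad_f)
    stationarity = [grad_f[i] + sum(l * g[i] for l, g in zip(lam, grad_h))
                    for i in range(n)]
    return (max(abs(s) for s in stationarity)
            + max((abs(v) for v in h_vals), default=0.0))

def converged(residual, tol=1.0e-6):
    # tol may be tightened toward 1.0e-12 for a more accurate solution
    return residual <= tol

# Unconstrained f(x) = x**2 at its minimizer x = 0: gradient is zero
done = converged(kkt_residual([0.0], [], [], []))
```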


Screening is the only optimization method that does not require at least one objective. To use any of
the other optimization methods, you must have an Objective defined for at least one of the output
parameters (only one output can have an objective for the NLPQL, MISQP, and Adaptive Single-Objective
methods).

If this is not done, then the optimization problem is either undefined (Objective Type for each parameter
set to No Objective) or is merely a constraint satisfaction problem (Constraint Type set to a value other
than No Constraint). When the problem is not defined, as in Figure 8: Case Where the Optimization
Cannot be Run (p. 228), the MOGA, NLPQL, MISQP, Adaptive Single-Objective, or Adaptive
Multiple-Objective analysis cannot be run.

If the problem defined is only one of constraint satisfaction, as in Figure 9: Case Where GDO Solves a
Constraint Satisfaction Problem (p. 229), the single Pareto front displayed (from the Tradeoff chart)
represents the feasible points that meet the constraints. The unfeasible points that do not satisfy the
constraints can also be displayed on the chart. This is a good way to demonstrate the feasibility
boundaries of the problem in the output space (i.e., to determine the boundaries of the design envelope
that reflects your preferences). Unfeasible points can also be displayed on any chart where the
optimization contains at least one constraint. As shown in Figure 8: Case Where the Optimization Cannot
be Run (p. 228), if the entire Objective Type column is set to No Objective, the optimization process
cannot be run, as no outputs have objectives specified that could drive the algorithm. The fact that
user input is required before the algorithm can be run is reflected both by the state icon and the Quick
Help of the Optimization cell in the Project Schematic view and of the main node of the Outline in
the Optimization tab. If you try to update the optimization with no objectives set, you will also get an
error message in the Message view.

Figure 8: Case Where the Optimization Cannot be Run

As shown in Figure 9: Case Where GDO Solves a Constraint Satisfaction Problem (p. 229), some
combinations of Constraint Type, Lower Bound, and possibly Upper Bound settings specify that the
problem is one of constraint satisfaction. The results obtained from these settings indicate only the
boundaries of the feasible region in the design space. Currently, only the Screening option can be used
to solve a pure constraint satisfaction problem.


Figure 9: Case Where GDO Solves a Constraint Satisfaction Problem

The Objective settings of the input parameters do not affect the sample generation of the MOGA,
NLPQL, MISQP, Adaptive Single-Objective, or Adaptive Multiple-Objective methods. However, these
settings do affect the ranking of the samples that generate the candidate designs when the Optimization
is updated.

In a Screening optimization, sample generation is driven by the domain definition (i.e., the lower and
upper bounds for the parameter) and is not affected by parameter objective and constraint settings.
The parameter objective and constraint settings, however, do affect the generation of candidate points.

When a constraint is defined, it is treated as a constraint if Constraint Handling in the Optimization
Properties view is set to Strict. If Constraint Handling is set to Relaxed, those constraints are actually
treated as objectives. If only constraints are defined, Screening is your only optimization option. If at
least one objective is defined, the other optimization methods are also available.

Typically, the Screening method is best suited for conducting a preliminary design study, because it is
a low-resolution, fast, and exhaustive study that can be useful in locating approximate solutions quickly.
Since the Screening method does not depend on any parameter objectives, you can change the object-
ives after performing the Screening analysis to view the candidates that meet different objective sets,
allowing you to quickly perform preliminary design studies. It's easy to keep changing the objectives
and constraints to view the different corresponding candidates, which are drawn from the original
sample set.

For example, you may run the Screening process for 3,000 samples, then use the Tradeoff or Samples
charts to view the Pareto fronts. The solutions slider can be used in the chart Properties to display only
the prominent points you may be interested in (usually the first few fronts). When you do a MOGA or
Adaptive Multiple-Objective optimization, you can limit the number of Pareto fronts that are computed
in the analysis.

Once all of your objectives are defined, click Update Optimization to generate up to the requested
number of candidate points. You may save any of the candidates by right-clicking and selecting Explore
Response Surface at Point, Insert as Design Point(s), or Insert as Refinement Point(s).

When working with a Response Surface Optimization system, you should validate the best obtained
candidate results by saving the corresponding design points, solving them at the Project level, and
comparing the results.

Goal Driven Optimization Theory


In this section, we'll discuss theoretical aspects of Goal Driven Optimization and the different optimization
methods available in DesignXplorer.


Sampling for Constrained Design Spaces


Shifted Hammersley Sampling (Screening)
Single-Objective Optimization Methods
Multiple-Objective Optimization Methods
Decision Support Process

Sampling for Constrained Design Spaces


Common samplings created in a constrained design space might produce unfeasible sample points if
the underlying formulation disregards the constraints. DesignXplorer provides a sampling type that
takes into account the constraints defined for input parameters.

DesignXplorer generates the sampling:

S(X, Y)

Subject to:

g_j(X, Y) ≥ 0, j = 1, ..., m

X ∈ R^(N_c), Y ∈ N^(N_i)

X_L ≤ X ≤ X_U

Y_L ≤ Y ≤ Y_U

The symbols X and Y denote the vectors of the continuous and the integer variables, respectively.
DesignXplorer allows you to define a constrained design space with linear or non-linear constraints.

Examples of linear constraints:

x_1 + x_2 ≤ c

x_1 − x_2 ≥ 0

Examples of non-linear constraints:

x_1² + x_2² ≤ c


The constraint sampling is a heuristic method based on Shifted-Hammersley (Screening) and MISQP
sampling methods.

For a given screening of N sample points generated in the hypercube of input parameters, only a part
of this sampling (M) is within the constrained design space (i.e. the feasible domain).
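The first stage — keeping only the part of the screening that falls in the feasible domain — can be sketched as follows (the constraint and sample values are hypothetical, for illustration only):

```python
def split_feasible(samples, constraints):
    """Partition N screening samples into the M feasible points
    (all constraints g_j(x) >= 0 satisfied) and the unfeasible remainder."""
    feasible, unfeasible = [], []
    for x in samples:
        (feasible if all(g(x) >= 0 for g in constraints) else unfeasible).append(x)
    return feasible, unfeasible

# Hypothetical constrained space: x1 + x2 <= 1.5, i.e. g(x) = 1.5 - x1 - x2 >= 0
g = lambda x: 1.5 - x[0] - x[1]
samples = [(0.1, 0.2), (0.9, 0.9), (0.5, 0.5), (1.0, 1.0)]
feasible, unfeasible = split_feasible(samples, [g])
```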

To obtain a constraint sampling that is close to a uniform sampling of N_f points, DesignXplorer first
builds the set E of feasible sample points,

E = { P_i : g_j(P_i) ≥ 0, j = 1, ..., m }

and compares Card(E) with N_f.


If Card(E) = N_f, the sampling is completed.

If Card(E) > N_f, only the N_f furthermost sample points are kept.

Otherwise, DesignXplorer needs to create additional points to reach the Nf sample points.

The Screening is not guaranteed to find either enough sample points or at least one feasible point. For
this reason, DesignXplorer solves an MISQP problem for each constraint on input parameters, as follows:

Minimize:

C_j = g_j(X, Y)

Subject to:

g_j(X, Y) ≥ 0, j = 1, ..., m

In the best case, we obtain m feasible points.

If A is the center of mass of the m feasible points P_i, then:

A = (1/m) Σ_{i=1..m} P_i

DesignXplorer then solves the new MISQP problem:

Minimize:

d(P) = ‖P − A‖²

Subject to:

g_j(P) ≥ 0, j = 1, ..., m

Once the point closest to the center of mass of the feasible points has been found, DesignXplorer projects
part of the unfeasible points onto the skin of the feasible domain.

Given an unfeasible point B and a feasible point G, we can build a new feasible point C:

C = G + α·(B − G)

To find C close to the skin of the feasible domain, the algorithm starts with α equal to 1, and decreases
its value until C is feasible.
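This α-reduction can be sketched as follows (the halving schedule and the disk-shaped feasible domain are illustrative assumptions; the actual step schedule is not documented here):

```python
def project_to_skin(G, B, is_feasible, alpha=1.0, shrink=0.5, max_iter=50):
    """Move from feasible point G toward unfeasible point B:
    C = G + alpha*(B - G); decrease alpha until C is feasible,
    leaving C close to the skin (boundary) of the feasible domain."""
    for _ in range(max_iter):
        C = tuple(g + alpha * (b - g) for g, b in zip(G, B))
        if is_feasible(C):
            return C
        alpha *= shrink  # decrease alpha and retry
    return G  # fall back to the known feasible point

# Hypothetical feasible domain: the unit disk x1^2 + x2^2 <= 1
feasible = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
C = project_to_skin((0.0, 0.0), (2.0, 0.0), feasible)
```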


To optimize the distribution of points on the skin of the feasible domain, the algorithm chooses the
furthermost unfeasible points in terms of angles.

If B is the first unfeasible point processed, the next one is the unfeasible point X that maximizes the
angle between the vectors (B − G) and (X − G), i.e., that minimizes the cosine:

((B − G) · (X − G)) / (‖B − G‖₂ ‖X − G‖₂)

Once enough points have been generated on the skin of the feasible domain, the algorithm generates
internal points. The internal points are obtained by combination of existing feasible points and points
generated on the skin of the feasible domain.

Shifted Hammersley Sampling (Screening)


The Shifted Hammersley sampling method is the sampling strategy used for all sample generation,
except for the NLPQL method. The conventional Hammersley sampling algorithm is a quasi-random
number generator which has very low discrepancy and is used for quasi-Monte Carlo simulations. A
low-discrepancy sequence is defined as a sequence of points that approximate the equidistribution in
a multi-dimensional cube in an optimal way. In other words, the design space is populated almost
uniformly by these sequences and, due to the inherent properties of Monte Carlo sampling,
dimensionality is not a problem (i.e., the number of points does not increase exponentially with an
increase in the number of input parameters). The conventional Hammersley algorithm is constructed
by using the radical inverse function. Any integer n can be represented as a sequence of digits n_0,
n_1, n_2, ..., n_m by the following equation:

n = n_0 n_1 n_2 ... n_m (20)

For example, consider the integer 687459, which can be represented this way as n_0 = 6, n_1 = 8, and so
on. Because this integer is represented with radix 10, we can write it as 687459 = 9 + 5 · 10 + 4 · 100
and so on. In general, for a radix R representation, the equation is:

n = n_0 · R^m + n_1 · R^(m−1) + ... + n_m (21)

The inverse radical function is defined as the function which generates a fraction in (0, 1) by reversing
the order of the digits in Equation 21 (p. 232) about the decimal point, as shown below.


Φ_R(n) = 0.n_m n_(m−1) n_(m−2) ... n_0
       = n_m · R^(−1) + n_(m−1) · R^(−2) + ... + n_0 · R^(−(m+1)) (22)

Thus, for a k-dimensional search space, the Hammersley points are given by the following expression:

x_i = [ i/N, Φ_(R_1)(i), Φ_(R_2)(i), ..., Φ_(R_(k−1))(i) ] (23)

where i = 0, ..., N indicates the sample points. Now, from the plot of these points, it is seen that the first
row (corresponding to the first sample point) of the Hammersley matrix is zero and the last row is not
1. This implies that, for the k-dimensional hypercube, the Hammersley sampler generates a block of
points that are skewed more toward the origin of the cube and away from the far edges and faces. To
compensate for this bias, a point-shifting process is proposed that shifts all Hammersley points by the
amount below:

Δ = 1/(2N) (24)

This moves the point set more toward the center of the search space and avoids unnecessary bias.
Thus, the initial population always provides unbiased, low-discrepancy coverage of the search space.
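A minimal generator along these lines (a sketch assuming prime radices 2, 3, 5, ... for the Φ_R dimensions and a shift of 1/(2N), one common choice; DesignXplorer's internal implementation may differ in detail):

```python
def radical_inverse(n, base):
    """Reverse the base-R digits of n about the decimal point (Eq. 22)."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def shifted_hammersley(N, primes=(2, 3, 5, 7)):
    """Hammersley points (Eq. 23) in the unit hypercube, shifted by
    1/(2N) toward the center to remove the bias toward the origin."""
    shift = 1.0 / (2.0 * N)
    points = []
    for i in range(N):
        p = [i / N] + [radical_inverse(i, b) for b in primes]
        points.append([min(c + shift, 1.0) for c in p])
    return points

pts = shifted_hammersley(8, primes=(2,))  # 8 two-dimensional samples
```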

Single-Objective Optimization Methods


In this section, we'll discuss the different types of single-objective optimization methods.
Nonlinear Programming by Quadratic Lagrangian (NLPQL)
Mixed-Integer Sequential Quadratic Programming (MISQP)
Adaptive Single-Objective Optimization (ASO)

Nonlinear Programming by Quadratic Lagrangian (NLPQL)


NLPQL (Nonlinear Programming by Quadratic Lagrangian) is a mathematical optimization algorithm as
developed by Klaus Schittkowski. This method solves constrained nonlinear programming problems of
the form

Minimize:

f(x)

Subject to:

g_k(x) ≥ 0, k = 1, ..., K

h_l(x) = 0, l = 1, ..., L

where

x_L ≤ x ≤ x_U

It is assumed that the objective function and constraints are continuously differentiable. The idea is to
generate a sequence of quadratic programming subproblems obtained by a quadratic approximation
of the Lagrangian function and a linearization of the constraints. Second order information is updated
by a quasi-Newton formula and the method is stabilized by an additional (Armijo) line search.


The method presupposes that the problem size is not too large and that it is well-scaled. Also, the
accuracy of the method depends on the accuracy of the gradients. Since, for most practical problems,
analytical gradients are unavailable, it is imperative that the numerical (finite difference based) gradients
are as accurate as possible.

Newton's Iterative Method

Before the actual derivation of the NLPQL equations, Newton's iterative method for the solution of
nonlinear equation sets is reviewed. Let f(x) be a multivariable function such that it can be expanded
about the point x in a Taylor series:

f(x + Δx) ≈ f(x) + {∇f}^T {Δx} + ½ {Δx}^T [∇²f] {Δx} (25)

where it is assumed that the Taylor series actually models a local area of the function by a quadratic
approximation. The objective is to devise an iterative scheme by linearizing the vector Equation 25 (p. 234).
To this end, it is assumed that at the end of the iterative cycle, Equation 25 (p. 234) would be exactly
valid. This implies that the first variation of the following expression with respect to Δx must be zero:

F = f(x) + {∇f}^T {Δx} + ½ {Δx}^T [∇²f] {Δx} (26)

which implies that

∂F/∂{Δx} + {∇f} + [∇²f]{Δx} = {0} (27)

The first expression indicates the first variation of the converged solution with respect to the increment
in the independent variable vector. This gradient is necessarily zero since the converged solution clearly
does not depend on the step-length. Thus, Equation 27 (p. 234) can be written as the following:

{x}_(j+1) = {x}_j − [∇²f]_j^(−1) {∇f}_j (28)

where the index "j" indicates the iteration. Equation 28 (p. 234) is thus used in the main quadratic
programming scheme.
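For a one-dimensional function, Equation 28 reduces to x_(j+1) = x_j − f′(x_j)/f″(x_j), which can be sketched as follows (an illustration of the iteration, not DesignXplorer code):

```python
def newton_minimize(grad, hess, x0, tol=1.0e-10, max_iter=50):
    """1-D Newton iteration x_{j+1} = x_j - f'(x_j)/f''(x_j) (Eq. 28),
    driving the gradient of the local quadratic model to zero."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 3)^2 + 1 is quadratic, so one Newton step reaches the
# minimizer x = 3 from any starting point.
x_star = newton_minimize(lambda x: 2.0 * (x - 3.0), lambda x: 2.0, x0=10.0)
```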

NLPQL Derivation:

Consider the following single-objective nonlinear optimization problem. It is assumed that the problem
is smooth and analytic throughout and is a problem of N decision variables.

Minimize:

f(x)

Subject to:

g_k(x) ≥ 0, k = 1, ..., K

h_l(x) = 0, l = 1, ..., L

where

x_L ≤ x ≤ x_U (29)


where K and L are the numbers of inequality and equality constraints. In many cases the inequality
constraints are bounded above and below; in such cases, it is customary to split the constraint into two
inequality constraints. In order to approximate the quadratic sub-problem assuming the presence of
only equality constraints, the Lagrangian for Equation 29 (p. 234) is written as:

L(x, λ) = f(x) + {λ}^T {h(x)} (30)

where {λ} is an L-dimensional (non-zero) vector of Lagrange multipliers. Thus, Equation 30 (p. 235)
becomes a functional, which depends on two sets of independent vectors. In order to minimize this
expression, we seek the stationarity of this functional with respect to the two sets of vectors. These
expressions give rise to two sets of vector expressions as the following:

∂L/∂{x} = {0} ⇒ {∇f} + [H]{λ} = {0}
∂L/∂{λ} = {0} ⇒ {h} = {0} (31)

Equation 31 (p. 235) defines the Karush-Kuhn-Tucker (KKT) conditions which are the necessary conditions
for the existence of the optimal point. The first equation comprises "N" nonlinear algebraic equations
and the second one comprises "L" nonlinear algebraic equations. The matrix [H] is an (N×L) matrix defined
as the following:

[H]_(N×L) = [ {∇h_1} {∇h_2} {∇h_3} ... {∇h_L} ] (32)

Thus, in Equation 32 (p. 235), each column is a gradient of the corresponding equality constraint. For
our convenience, the nonlinear equations of Equation 31 (p. 235) can be written as the following:
{F(Y)} = {0} (33)

where Equation 33 (p. 235) is an (N+L) system of nonlinear equations. The independent variable set {Y}
can be written as:

{Y} = { x ; λ } (34)

while the functional set {F} is written as the following:

{F} = { {∇f} + [H]{λ} ; {h} } (35)

Referring to the section on Newton based methods, the vector {Y} in Equation 34 (p. 235) is updated as
the following:

{Y}_(j+1) = {Y}_j + {ΔY}_j (36)

The increment of the vector is given by the iterative scheme in Equation 28 (p. 234). Referring to
Equation 33 (p. 235), Equation 34 (p. 235), and Equation 35 (p. 235), the iterative equation is expressed
as the following:

[∇F]_j {ΔY}_j = −{F}_j (37)

Note that this is only a first order approximation of the Taylor expansion of Equation 33 (p. 235). This is
in contrast to Equation 28 (p. 234), where a quadratic approximation is done. This is because in
Equation 34 (p. 235), a first order approximation has already been done. The matrices and the vectors in
Equation 37 (p. 235) can be expanded as the following:



{F} = { {∇f} + [H]{λ} ; {h} } (38)

and

[∇F] = [ [∇²L]_(N×N)   [H]_(N×L)
         [H]^T_(L×N)   [0]_(L×L) ]_((N+L)×(N+L)) (39)

This is obtained by taking the gradients of Equation 35 (p. 235) with respect to the two variable sets.
The sub-matrix [∇²L] is the (N×N) Hessian of the Lagrange function in implicit form.

To demonstrate how Equation 39 (p. 236) is formed, let us consider the following simple case. Given a
vector of two variables x and y, we write:

{V} = { f_1(x, y) ; f_2(x, y) }

It is required to find the gradient (i.e., the Jacobian) of the vector function {V}. To this effect, the derivation
of the Jacobian is evident because, in the present context, the vector {V} indicates a set of nonlinear
(algebraic) equations and the Jacobian is the coefficient matrix which "links" the increment of the
independent variable vector to the dependent variable vector. Thus, we can write:

{ΔV} = [ ∂f_1/∂x   ∂f_1/∂y
         ∂f_2/∂x   ∂f_2/∂y ] { Δx ; Δy }

Thus, the Jacobian matrix is formed. In some applications, the equation is written in the following form:

[∇V] = [ {∇f_1} {∇f_2} ]

where the column "i" indicates the gradient of the "i-th" component of the dependent variable vector
with respect to the independent vector. This is the formalism we use in determining Equation 39 (p. 236).
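When analytic gradients are unavailable, as noted earlier, such a Jacobian is typically estimated numerically; a central-difference sketch (illustrative only, with a hypothetical vector function):

```python
def jacobian(V, x, h=1.0e-6):
    """Central-difference Jacobian J[i][j] = dV_i/dx_j of a vector
    function V: R^n -> R^m, as used when analytic gradients are missing."""
    m, n = len(V(x)), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        Vp, Vm = V(xp), V(xm)
        for i in range(m):
            J[i][j] = (Vp[i] - Vm[i]) / (2.0 * h)
    return J

# Hypothetical vector function V(x, y) = [x*y, x + y**2]
V = lambda x: [x[0] * x[1], x[0] + x[1] ** 2]
J = jacobian(V, [2.0, 3.0])  # exact Jacobian: [[3, 2], [1, 6]]
```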

Equation 37 (p. 235) may be rewritten as the following:


[ [∇²L]^(j)_(N×N)   [H]^(j)_(N×L)
  [H]^T_(L×N)       [0]_(L×L) ] { {Δx}_j ; {Δλ}_j } = −{ {∇f}_j + [H]{λ}_j ; {h}_j } (40)

Solving Equation 40 (p. 237) iteratively will solve Equation 41 (p. 237) in a series of linear steps until the
point when the increment is negligible. The update schemes for the independent variable and Lagrange
multiplier vectors x and λ are written as the following:

{x}_(j+1) = {x}_j + {Δx}_j
{λ}_(j+1) = {λ}_j + {Δλ}_j (41)

The individual equations of Equation 40 (p. 237) are now written separately. The first equation
(corresponding to minimization with respect to x) may be written as:

[∇²L]{Δx}_j + [H]{Δλ}_j = −{∇f}_j − [H]{λ}_j

[∇²L]{Δx}_j + [H]({λ}_j + {Δλ}_j) = −{∇f}_j

[∇²L]{Δx}_j + [H]{λ}_(j+1) = −{∇f}_j (42)

The last step in Equation 42 (p. 237) is done by using Equation 41 (p. 237). Thus, using Equation 42 (p. 237)
and Equation 40 (p. 237), the iterative scheme can be rewritten in a simplified form as:

[ [∇²L]   [H]
  [H]^T   [0] ] { {Δx}_j ; {λ}_(j+1) } = −{ {∇f}_j ; {h}_j } (43)

Thus, Equation 43 (p. 237) can be used directly to compute the (j+1)-th value of the Lagrange multiplier
vector. Note that by using Equation 43 (p. 237), it is possible to compute the update of x and the new
value of the Lagrange multiplier vector in the same iterative step. Equation 43 (p. 237) shows the
general scheme by which the KKT optimality condition can be solved iteratively for a generalized
optimization problem.

Now, consider the following quadratic approximation problem as given by:

Minimize Q(Δx) = f + {∇f}^T {Δx} + ½ {Δx}^T [∇²L] {Δx} (44)

Subject to the equality constraints given by the following:

{h} + [H]^T {Δx} = {0} (45)

where the definitions of the matrices in Equation 44 (p. 237) and Equation 45 (p. 237) are given earlier. In
order to solve the quadratic minimization problem, let us form the Lagrangian as:

L̃(Δx, λ) = f + {∇f}^T {Δx} + ½ {Δx}^T [∇²L] {Δx} + {λ}^T ({h} + [H]^T {Δx}) (46)

Now, the KKT conditions may be derived (as done earlier) by taking gradients of the Lagrangian in
Equation 46 (p. 237) as the following:


∂L̃/∂{Δx} = {0} ⇒ {∇f} + [∇²L]{Δx} + [H]{λ} = {0}
∂L̃/∂{λ} = {0} ⇒ {h} + [H]^T {Δx} = {0} (47)

In a condensed matrix form, Equation 47 (p. 238) may be written as the following:

[ [∇²L]   [H]
  [H]^T   [0] ] { {Δx} ; {λ} } = −{ {∇f} ; {h} } (48)

Equation 48 (p. 238) is the same as Equation 43 (p. 237). This implies that the iterative scheme in
Equation 48 (p. 238) actually solves a quadratic subproblem (Equation 44 (p. 237) and Equation 45 (p. 237))
in the domain Δx. In case the real problem is quadratic, then this iterative scheme solves the exact
problem.

On addition of inequality constraints, the Lagrangian of the actual problem can be written as the
following:

L(x, λ, μ, y) = f(x) + {λ}^T {h(x)} + {μ}^T ({g(x)} − {y²}) (49)

Note that the inequality constraints have been converted to equality constraints by using a set of slack
variables y as the following:

g_k(x) − y_k² = 0, k = 1, ..., M (50)

The squared term is used to ensure that the slack variable remains positive, which is required to satisfy
Equation 50 (p. 238). The Lagrangian in Equation 49 (p. 238) acts as an enhanced objective function. It is
seen that the only case where the additional terms may be active is when the constraints are not satisfied.

The KKT conditions as derived from Equation 49 (p. 238) (by taking first variations with respect to the
independent variable vectors) are:

∂L/∂{x} = {0} ⇒ {∇f} + [H]{λ} + [G]{μ} = {0} (N×1)
∂L/∂{λ} = {0} ⇒ {h} = {0} (L×1)
∂L/∂{μ} = {0} ⇒ {g} − {y²} = {0} (M×1) (51)
∂L/∂{y} = {0} ⇒ μ_k y_k = 0, k = 1, ..., M (M×1)

where the columns of [G]_(N×M) are the gradients {∇g_k} of the inequality constraints.

From the KKT conditions in Equation 51 (p. 238), it is evident that there are (N + L + 2M) equations for
a similar number of unknowns; thus this equation set possesses a unique solution. Let this (optimal)
solution be marked as x*. At this point, a certain number of constraints will be active and some others
will be inactive. Let the number of active inequality constraints be p and the total number of active
equality constraints be q. By an active constraint it is assumed that the constraint is at its threshold
value of zero. Thus, let J1 and J2 be the sets of active and inactive equality constraints (respectively)
and K1 and K2 be the sets of active and inactive inequality constraints, respectively. Thus, we can write
the following relations:

J1 = { j : h_j(x*) = 0 }, meas(J1) = q
K1 = { k : g_k(x*) = 0 }, meas(K1) = p (52)
K2 = { k : g_k(x*) > 0 }, meas(K2) = M − p

where meas() indicates the count of the elements of the set under consideration. These sets thus
partition the constraints into active and inactive sets. Thus, we can write:


μ_k = 0, y_k² = g_k(x*) > 0, k ∈ K2
g_j(x*) = 0, y_j = 0, μ_j ≥ 0, j ∈ K1 (53)

Thus, it can be stated that the last three equations in Equation 51 (p. 238) may be represented by
Equation 53 (p. 239). These are the optimality conditions for constraint satisfaction. From these equations,
we can now eliminate y, such that the Lagrangian in Equation 49 (p. 238) will depend on only three
independent variable vectors. From the last two conditions in Equation 53 (p. 239), we can write the
following condition, which is always valid for an optimal point:

μ_k g_k = 0, k = 1, ..., M (54)

Using Equation 54 (p. 239) in the set Equation 51 (p. 238), the KKT optimality conditions may be written
as the following:

{∇f} + [H]{λ} + [G]{μ} = {0} (N×1)
{h} = {0} (L×1) (55)
⌈μ⌋{g} = {0} (M×1)

Thus, the new set contains only (N + L + M) unknowns. Now, following the same logic as in
Equation 34 (p. 235), as done earlier, let us express Equation 55 (p. 239) in the same form as in
Equation 33 (p. 235). This represents an (N + L + M) system of nonlinear equations. The independent
variable set may be written in vector form as the following:

{Y} = { x ; λ ; μ } (56)

Newton's iterative scheme is also used here; thus, the same equations as in Equation 36 (p. 235) and
Equation 37 (p. 235) also apply here. Thus, following Equation 37 (p. 235), we can write:

[∇F]_j {ΔY}_j = −{F}_j (57)

Taking the first variation of the KKT equations in Equation 55 (p. 239) and equating it to zero, the
sub-quadratic equation is formulated as the following:

[ [∇²L]      [H]   [G]
  [H]^T      [0]   [0]
  ⌈μ⌋[G]^T   [0]   ⌈g⌋ ] { {Δx} ; {Δλ} ; {Δμ} } = −{F} (58)

where ⌈·⌋ indicates a diagonal matrix.

At the j-th step, the first equation can be written as (by linearization):

[∇²L]_j {Δx}_j + [H]_j {Δλ}_j + [G]_j {Δμ}_j = −{∇f}_j − [H]_j {λ}_j − [G]_j {μ}_j (59)

which simplifies to:

[∇²L]_j {Δx}_j + [H]_j {λ}_(j+1) + [G]_j {μ}_(j+1) = −{∇f}_j (60)

Thus, the linearized set of equations for Newton's method can be written in an explicit form as:

[ [∇²L]   [H]   [G]
  [H]^T   [0]   [0]
  [G]^T   [0]   [0] ] { {Δx}_j ; {λ}_(j+1) ; {μ}_(j+1) } = −{ {∇f}_j ; {h}_j ; {g}_j } (61)

So, in the presence of both equality and inequality constraints, Equation 61 (p. 240) can be used in a
quasi-Newtonian framework to determine the increment {Δx}_j and the Lagrange multipliers {λ}_(j+1)
and {μ}_(j+1) when stepping from iteration j to j+1.

The Hessian matrix [∇²L] is not computed directly, but is estimated and updated in a BFGS-type line
search.
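The standard BFGS update used by such schemes has the form B⁺ = B − (B s s^T B)/(s^T B s) + (y y^T)/(y^T s), where s is the step and y the change in gradient; a sketch in plain-Python linear algebra (illustrative only):

```python
def bfgs_update(Bk, s, y):
    """Standard BFGS update of the Hessian approximation:
    B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s)."""
    n = len(s)
    Bs = [sum(Bk[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))   # curvature along s
    ys = sum(y[i] * s[i] for i in range(n))     # must be > 0 to keep B+ positive definite
    return [[Bk[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]

# For a quadratic f(x) = 0.5 * x^T A x the gradient change is y = A s, so
# one update with B0 = I recovers the curvature along the step direction.
B1 = bfgs_update([[1.0, 0.0], [0.0, 1.0]], s=[1.0, 0.0], y=[3.0, 0.0])
```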

Mixed-Integer Sequential Quadratic Programming (MISQP)


MISQP (Mixed-Integer Sequential Quadratic Programming) is a mathematical optimization algorithm
developed by Oliver Exler, Thomas Lehmann, and Klaus Schittkowski (the author of NLPQL). This method
solves Mixed-Integer Non-Linear Programming (MINLP) problems of the form:

Minimize:

f(x, y)

Subject to:

g_j(x, y) = 0, j = 1, ..., m_e

g_j(x, y) ≥ 0, j = m_e + 1, ..., m

where

x ∈ R^(n_c), y ∈ N^(n_i)

x_l ≤ x ≤ x_u

y_l ≤ y ≤ y_u

The symbols x and y denote the vectors of the continuous and integer variables, respectively. It is
assumed that the problem functions f(x, y) and g_j(x, y), j = 1, ..., m, are continuously differentiable
for all x ∈ R^(n_c). It is not assumed that integer variables can be relaxed. In other words, problem
functions are evaluated only at integer points and never at any fractional values in between.


MISQP solves MINLP by a modified sequential quadratic programming (SQP) method. After linearizing
constraints and constructing a quadratic approximation of the Lagrangian function, mixed-integer
quadratic programs are successively generated and solved by an efficient branch-and-cut method. The
algorithm is stabilized by a trust region method as originally proposed by Yuan for continuous programs.
Second order corrections are retained. The Hessian of the Lagrangian function is approximated by BFGS
updates subject to the continuous and integer variables. MISQP is also able to solve non-convex nonlinear
mixed-integer programs.

References:

O. Exler, K. Schittkowski, T. Lehmann (2012): A comparative study of numerical algorithms for nonlinear and
nonconvex mixed-integer optimization, to appear: Mathematical Programming Computation.

O. Exler, T. Lehmann, K. Schittkowski (2012): MISQP: A Fortran subroutine of a trust region SQP algorithm for
mixed-integer nonlinear programming - user's guide, Report, Department of Computer Science, University
of Bayreuth.

O. Exler, K. Schittkowski (2007): A trust region SQP algorithm for mixed-integer nonlinear programming,
Optimization Letters, Vol. 1, p. 269-280.

Adaptive Single-Objective Optimization (ASO)


Adaptive Single-Objective is a mathematical optimization method that combines an OSF Design of
Experiments, a Kriging response surface, and the MISQP optimization algorithm. It is a gradient-based
algorithm that operates on a response surface to provide a refined, global, optimized result.

ASO optimization supports a single objective, multiple constraints, and is available for continuous
parameters and continuous parameters with Manufacturable Values. It does not support the use of
parameter relationships in the optimization domain and is available only for Direct Optimization systems.

Like the MISQP method, this method solves constrained nonlinear programming problems of the form:

Minimize:

f(x)

Subject to:

g_k(x) ≥ 0,   k = 1, ..., K
h_l(x) = 0,   l = 1, ..., L

where

x_L ≤ x ≤ x_U

The purpose is to refine and reduce the domain intelligently and automatically to provide the global
extrema.

Adaptive Single-Objective Workflow

The workflow of the Adaptive Single-Objective optimization method is as follows:

DesignXplorer Theory

Adaptive Single-Objective Steps

1. OSF Sampling

Optimal Space-Filling Design (OSF) is used for the Kriging construction. In the original OSF, the
number of samples equals the number of divisions per axis and there is one sample in each division.

When a new OSF is generated after a domain reduction, the reduced OSF has the same number
of divisions as the original and keeps the existing design points within the new bounds. New
design points are added until there is a point in each division of the reduced domain.

In the example below, the original domain has eight divisions per axis and contains eight design
points. The reduced domain also has eight divisions per axis and includes two of the original design
points. Six new design points need to be added in order to have a design point in each division.


Note

The total number of design points in the reduced domain can exceed the number in
the original domain if multiple existing points wind up in the same division. In the ex-
ample above, if two existing points wound up in the same division of the new domain,
seven new design points (rather than six) would have been added in order to have a
point in each of the remaining divisions.
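The refill logic of this step can be sketched per axis as follows. The function name and the random placement within each division are illustrative assumptions; a real OSF generator would also optimize the space-filling properties across all axes jointly.

```python
import random

def refill_osf_axis(existing, lower, upper, n_divisions, rng=random.Random(0)):
    """Sketch of the per-axis refill after a domain reduction: keep the
    existing points that fall inside the new bounds and add one random
    point to every division that is still empty."""
    width = (upper - lower) / n_divisions
    kept = [x for x in existing if lower <= x <= upper]
    occupied = {min(int((x - lower) / width), n_divisions - 1) for x in kept}
    new_points = []
    for d in range(n_divisions):
        if d not in occupied:
            new_points.append(lower + (d + rng.random()) * width)
    return kept, new_points

# Mirrors the example above: 8 divisions, 2 of 3 old points inside the new bounds.
kept, added = refill_osf_axis([0.1, 0.35, 0.9], lower=0.0, upper=0.8, n_divisions=8)
print(len(kept), len(added))  # 2 points kept, 6 empty divisions refilled
```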

2. Kriging Generation

A response surface is created for each output, based on the current OSF and consequently on the
current domain bounds.

For details on the Kriging algorithm, see Kriging (p. 78) or Kriging Algorithms (p. 217).

3. MISQP Algorithm

MISQP is run on the current Kriging response surface to find potential candidates. A few MISQP
processes are run at the same time, beginning with different starting points, and consequently,
giving different candidates.

4. Candidate Point Validation

All the obtained candidates are either validated or not, based on the Kriging error predictor. Each
candidate point is checked to see whether further refinement of the Kriging surface would change its
selection. A candidate is considered acceptable if there are no points, according to this error prediction,
that call it into question. If the quality of the candidate is called into question, the domain bounds
are reduced; otherwise, the candidate is calculated as a verification point.

Refinement Point Creation (If the selection will not be changed)

When a new verification point is calculated, it is inserted in the current Kriging as a refinement
point and the MISQP process is restarted.

Domain Reduction (If the selection will be changed)

When candidates are validated, new domain bounds must be calculated. If all of the candidates
are in the same zone, the bounds are reduced, centered on the candidates. Otherwise, the bounds
are reduced as an inclusive box of all candidates. At each domain reduction, a new OSF is generated
(conserving design points between the new bounds) and a new Kriging is generated based on this
new OSF.

5. Convergence and Stop Criteria

The optimization is considered to be converged when the candidates found are stable. However,
there are four stop criteria that can stop the algorithm: Maximum Number of Evaluations, Max-
imum Number of Domain Reductions, Percentage of Domain Reductions, and Convergence
Tolerance.

Multiple-Objective Optimization Methods


In this section, well discuss both theoretical aspects and the different optimization methods available
for multi-objective optimization.
Pareto Dominance in Multi-Objective Optimization
Convergence Criteria in MOGA-Based Multi-Objective Optimization
Multi-Objective Genetic Algorithm (MOGA)
Adaptive Multiple-Objective Optimization (AMO)

Pareto Dominance in Multi-Objective Optimization


The concept of Pareto dominance is of extreme importance in multi-objective optimization, especially
where some or all of the objectives and constraints are mutually conflicting. In such a case, there is no
single point that yields the "best" value for all objectives and constraints (i.e., the Utopia Point). Instead,
the best solutions, often called a Pareto or non-dominated set, are a group of solutions such that selecting
any one of them in place of another will always sacrifice quality for at least one objective or constraint,
while improving at least one other. Formally, the description of such Pareto optimality for generic op-
timization problems can be formulated as in the following equations.

Taking a closer, more formal look at the multi-objective optimization problem, let the following denote
the set of all feasible solutions (i.e., those that do not violate the constraints):

X = { x : g_j(x) ≥ 0, j = 1, ..., m;  x_l ≤ x ≤ x_u }   (62)

The problem can then be simplified to:

min_{x ∈ X} F(x),   F(x) = (f_1(x), ..., f_k(x))^T   (63)

If there exists x* ∈ X that is optimal for all objective functions simultaneously, this is expressed,
for i = 1, ..., k, as:

f_i(x*) = min_{x ∈ X} f_i(x)   (64)

This indicates that x* is certainly a desirable solution. Unfortunately, this is a utopian situation that rarely
exists, as it is unlikely that all f_i(x) will attain their minimum values over X at a common point x*. The
question is left: What solution should be used? That is, how should an "optimal" solution be defined? First,
consider the so-called ideal (utopian) solution. In order to define this solution, separately attainable minima
must be found for all objective functions. Assuming there is one, let x^(i) be the solution of the scalar
optimization problem:


= *
 (65)


Here f_i* is called the individual minimum for the scalar problem i; the vector f* = (f_1*, ..., f_k*)^T is
called the ideal for a multi-objective optimization problem; and the point in X which determined this
vector is the ideal solution.

It is usually not true that Equation 66 (p. 245) holds, although it would be useful, as the multi-objective
problem would have been solved by considering a sequence for scalar problems. It is necessary to
define a new form of optimality, which leads to the concept of Pareto Optimality. Introduced by V.
Pareto in 1896, it is still the most important part of multi-objective optimization.

f_i(x*) = f_i*,   i = 1, ..., k   (66)

A point x* ∈ X is said to be Pareto optimal for the problem if there is no other vector x ∈ X such that,
for all i = 1, ..., k:

f_i(x) ≤ f_i(x*)   (67)

and, for at least one i ∈ {1, ..., k}:

f_i(x) < f_i(x*)   (68)

This definition is based on the intuitive conviction that the point x* ∈ X is chosen as optimal if no
criterion can be improved without worsening at least one other criterion. Unfortunately, the Pareto
optimum almost always gives not a single solution, but a set of solutions. Usually Pareto optimality is
spoken of as being global or local depending on the neighborhood of the solutions X, and in this case,
almost all traditional algorithms can at best guarantee local Pareto optimality. However, this MOGA-
based system, which incorporates global Pareto filters, yields the global Pareto front.
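The non-dominance test that defines the Pareto set can be sketched directly, with all objectives minimized (the function name and sample values are illustrative):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors
    (all objectives minimized). A point is dominated if another point is
    no worse in every objective and strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and
            any(q[i] < p[i] for i in range(len(p)))
            for q in points if q is not p
        )
        if not dominated:
            front.append(p)
    return front

samples = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(samples))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

Here (3.0, 4.0) is dominated by (2.0, 3.0), so it is filtered out; selecting among the remaining three always trades one objective against the other.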

Convergence Criteria in MOGA-Based Multi-Objective Optimization


Convergence criteria are the conditions that indicate when the optimization has converged. In the
MOGA-based multi-objective optimization methods, the following convergence criteria are available:

Maximum Allowable Pareto Percentage


The Maximum Allowable Pareto Percentage criterion looks for a percentage that represents a specified
ratio of Pareto points per Number of Samples Per Iteration. When this percentage is reached, the optim-
ization is converged.

Convergence Stability Percentage


The Convergence Stability Percentage criterion looks for population stability, based on mean and
standard deviation of the output parameters. When a population is stable with regards to the previous
one, the optimization is converged. The criterion functions in the following sequence:

Population 1: When the optimization is run, the first population is not taken into account. Because
this population was not generated by the MOGA algorithm, it is not used as a range reference for the
output range (for scaling values).

Population 2: The second population is used to set the range reference. The minimum, maximum,
range, mean, and standard deviation are calculated for this population.

Population 3 and beyond: Starting from the third population, the minimum and maximum output values
are used to scale the values (on a scale of 0 to 100). The mean variations and standard deviation
variations are checked; if both of these are smaller than the value for the Convergence Stability
Percentage property, the algorithm is converged.


So at each iteration i and for each active output, convergence occurs if:

|Mean_i − Mean_(i−1)| / (Max − Min) < S/100

and

|StdDev_i − StdDev_(i−1)| / (Max − Min) < S/100

where:

S = Stability Percentage
Meani = Mean of the ith Population
StdDevi = Standard Deviation of the ith Population
Max = Maximum Output Value calculated on the first generated population of MOGA
Min = Minimum Output Value calculated on the first generated population of MOGA
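As a sketch, the stability test for a single output can be written as follows, assuming the stability percentage S is compared as S/100 against the scaled mean and standard-deviation variations (the helper name and sample populations are illustrative):

```python
import statistics

def is_stable(prev_pop, curr_pop, out_min, out_max, stability_pct):
    """Sketch of the Convergence Stability Percentage test for one output:
    the mean and standard deviation of the population, scaled by the
    output range, must each move by less than S/100 between iterations."""
    scale = out_max - out_min
    d_mean = abs(statistics.mean(curr_pop) - statistics.mean(prev_pop)) / scale
    d_std = abs(statistics.pstdev(curr_pop) - statistics.pstdev(prev_pop)) / scale
    return d_mean < stability_pct / 100.0 and d_std < stability_pct / 100.0

prev = [10.0, 12.0, 11.0, 13.0]
curr = [10.1, 12.0, 11.1, 12.9]
print(is_stable(prev, curr, out_min=10.0, out_max=13.0, stability_pct=3.0))  # True
```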

Multi-Objective Genetic Algorithm (MOGA)


The MOGA used in GDO is a hybrid variant of the popular NSGA-II (Non-dominated Sorted Genetic Al-
gorithm-II) based on controlled elitism concepts. It supports all types of input parameters. The Pareto
ranking scheme is done by a fast, non-dominated sorting method that is an order of magnitude faster
than traditional Pareto ranking methods. The constraint handling uses the same non-dominance principle
as the objectives, thus penalty functions and Lagrange multipliers are not needed. This also ensures
that the feasible solutions are always ranked higher than the unfeasible solutions.

The first Pareto front solutions are archived in a separate sample set internally and are distinct from
the evolving sample set. This ensures minimal disruption of Pareto front patterns already available from
earlier iterations. You can control the selection pressure (and, consequently, the elitism of the process)
to avoid premature convergence by altering the Maximum Allowable Pareto Percentage property.
(For more information on this and other MOGA properties, see Performing a MOGA Optimization (p. 132).)

MOGA Workflow

The workflow of the MOGA optimization method is as follows:


MOGA Steps

1. First Population of MOGA

The initial population is used to run the MOGA algorithm.

2. MOGA Generates a New Population

MOGA is run and generates a new population via Crossover and Mutation. After the first iteration,
each population is run when it reaches the number of samples defined by the Number of Samples
Per Iteration property. For details, see MOGA Steps to Generate a New Population (p. 248).

3. Design Point Update

The design points in the new population are updated.

4. Convergence Validation

The optimization is validated for convergence.

Yes: Optimization Converged

MOGA converges when the Maximum Allowable Pareto Percentage or the Convergence Sta-
bility Percentage has been reached.

No: Optimization Not Converged


If the optimization is not converged, the process continues to the next step.

5. Stopping Criteria Validation

If the optimization has not converged, it is validated for fulfillment of stopping criteria.

Yes: Stopping Criteria Met

When the Maximum Number of Iterations criterion is met, the process is stopped without having
reached convergence.

No: Stopping Criteria Not Met

If the stopping criteria have not been met, the MOGA is run again to generate a new population
(return to Step 2).

6. Conclusion

Steps 2 through 5 are repeated in sequence until the optimization has converged or the stopping
criteria have been met. When either of these things occurs, the optimization concludes.

MOGA Steps to Generate a New Population

The process MOGA uses to generate a new population is comprised of two main steps: Crossover and
Mutation.

1. Crossover

Crossover combines (mates) two chromosomes (parents) to produce a new chromosome (offspring).
The idea behind crossover is that the new chromosome may be better than both of the parents if
it takes the best characteristics from each of the parents. Crossover occurs during evolution according
to a user-definable crossover probability.

Crossover for Continuous Parameters

A crossover operator linearly combines two parent chromosome vectors to produce two new
offspring according to the following equations:

Offspring_1 = a · Parent_1 + (1 − a) · Parent_2

Offspring_2 = (1 − a) · Parent_1 + a · Parent_2

where a is a weighting factor. Consider two parents, each consisting of four floating genes, that
have been selected for crossover. With a = 0.7, each gene of the two offspring is the corresponding
weighted combination of the two parent genes.
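A minimal sketch of this linear crossover, applied gene by gene (the function name and gene values are illustrative):

```python
def blend_crossover(parent1, parent2, a=0.7):
    """Linear crossover: Offspring1 = a*Parent1 + (1 - a)*Parent2,
    Offspring2 = (1 - a)*Parent1 + a*Parent2, applied gene by gene."""
    off1 = [a * g1 + (1 - a) * g2 for g1, g2 in zip(parent1, parent2)]
    off2 = [(1 - a) * g1 + a * g2 for g1, g2 in zip(parent1, parent2)]
    return off1, off2

parent1 = [0.1, 0.2, 0.3, 0.4]   # hypothetical gene values
parent2 = [0.5, 0.6, 0.7, 0.8]
off1, off2 = blend_crossover(parent1, parent2, a=0.7)
print(off1)  # close to [0.22, 0.32, 0.42, 0.52]
```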


Crossover for Discrete / Manufacturable Values Parameters

Each discrete or Manufacturable Values parameter is represented by a binary chain whose length
corresponds to the number of levels. For example, a parameter with two values (levels) is encoded to
one bit, a parameter with seven values is encoded to three bits, and an n-bit chain can represent a
parameter with up to 2^n values.

The concatenation of these chains forms the chromosome, which will crossover with another
chromosome.
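The encoding length follows directly from this rule: the number of bits for a parameter is the smallest n with 2^n ≥ the number of levels (the helper name is illustrative):

```python
import math

def bits_for_levels(levels):
    """Number of bits needed to encode a discrete parameter with the
    given number of levels: the smallest n with 2**n >= levels."""
    return max(1, math.ceil(math.log2(levels)))

print([bits_for_levels(n) for n in (2, 7, 8, 9)])  # [1, 3, 3, 4]
```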

Three different kinds of crossover are available:

One Point

A One Point crossover operator randomly selects a crossover point within a chromosome, then
interchanges the two parent chromosomes at this point to produce two new offspring.

Two Point

A Two Point crossover operator randomly selects two crossover points within a chromosome then
interchanges the two parent chromosomes between these points to produce two new offspring.

Uniform

A Uniform crossover operator decides (with some probability, which is known as the mixing ratio)
which parent will contribute each of the gene values in the offspring chromosomes. This allows
the parent chromosomes to be mixed at the gene level rather than the segment level (as with
one and two point crossover). For some problems, this additional flexibility outweighs the disad-
vantage of destroying building blocks.
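The three crossover kinds can be sketched on bit chromosomes as follows. The function names are illustrative, and a production GA would additionally apply the crossover probability before mating:

```python
import random

def one_point(c1, c2, rng):
    cut = rng.randrange(1, len(c1))          # single crossover point inside the chain
    return c1[:cut] + c2[cut:], c2[:cut] + c1[cut:]

def two_point(c1, c2, rng):
    i, j = sorted(rng.sample(range(1, len(c1)), 2))
    return (c1[:i] + c2[i:j] + c1[j:],       # swap only the middle segment
            c2[:i] + c1[i:j] + c2[j:])

def uniform(c1, c2, rng, mixing_ratio=0.5):
    o1, o2 = [], []
    for g1, g2 in zip(c1, c2):               # decide per gene, not per segment
        if rng.random() < mixing_ratio:
            o1.append(g1); o2.append(g2)
        else:
            o1.append(g2); o2.append(g1)
    return o1, o2

rng = random.Random(42)
a, b = [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]
print(one_point(a, b, rng))
print(two_point(a, b, rng))
print(uniform(a, b, rng))
```

In every case each offspring gene comes from exactly one of the two parents, so the parents' gene values are conserved position by position.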


2. Mutation

Mutation alters one or more gene values in a chromosome from its initial state. This can result in
entirely new gene values being added to the gene pool. With these new gene values, the genetic
algorithm may be able to arrive at a better solution than was previously possible. Mutation is an
important part of the genetic search, as it helps to prevent the population from stagnating at any
local optima. Mutation occurs during evolution according to a user-defined mutation probability.

Mutation for Continuous Parameters

For continuous parameters, a polynomial mutation operator is applied to implement mutation.

C = P + δ · (UpperBound − LowerBound)

where C is the child, P is the parent, and δ is a small variation calculated from a polynomial distribution.
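A common concrete form of this operator follows Deb's polynomial mutation; the distribution index eta, the function name, and the default values here are assumptions for illustration, not DesignXplorer's internal settings:

```python
import random

def polynomial_mutation(parent, lower, upper, eta=20.0, rng=random.Random(0)):
    """Sketch of a polynomial mutation operator: the child is the parent
    plus a small variation delta scaled by the parameter range, where
    delta follows a polynomial distribution controlled by eta."""
    u = rng.random()
    if u < 0.5:
        delta = (2 * u) ** (1 / (eta + 1)) - 1          # negative variation
    else:
        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))    # positive variation
    child = parent + delta * (upper - lower)
    return min(max(child, lower), upper)                # clamp to the bounds

print(polynomial_mutation(0.5, 0.0, 1.0))
```

Larger eta concentrates delta near zero, so most mutations stay close to the parent while occasionally allowing larger jumps.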

Mutation for Discrete / Manufacturable Values Parameters

For discrete or Manufacturable Values parameters, a mutation operator simply inverts the value
of the chosen gene (0 goes to 1 and 1 goes to 0) with a probability of 0.5. This mutation operator
can only be used for binary genes.


Adaptive Multiple-Objective Optimization (AMO)


Adaptive Multiple-Objective is a mathematical optimization method that combines a Kriging response surface
and the MOGA optimization algorithm. It allows you to either generate a new sample set or use an
existing set, providing a more refined approach than the Screening method. Except when necessary,
the optimizer does not evaluate all design points. The general optimization approach is the same as
MOGA, but a Kriging response surface is used; part of the population is simulated by evaluations of
the Kriging and the Kriging error predictor reduces the number of evaluations used in finding the first
Pareto front solutions.

AMO optimization supports multiple objectives, multiple constraints, and is limited to continuous
parameters and continuous parameters with Manufacturable Values. It is available only for Direct Op-
timization systems.

Note

AMO does not support discrete parameters because with discrete parameters, it is necessary
to construct a separate response surface for each discrete combination. When discrete
parameters are used, MOGA is the more efficient optimization method. For details on how
MOGA works, see Multi-Objective Genetic Algorithm (MOGA) (p. 246).

Adaptive Multiple-Objective Workflow

The workflow of the Adaptive Multiple-Objective optimization method is as follows:


Adaptive Multiple-Objective Steps

1. First Population of MOGA

The initial population of MOGA is used for the Kriging construction.

2. Kriging Generation

A Kriging response surface is created for each output, based on the first population and then im-
proved during simulation with the addition of new design points.

For details on the Kriging algorithm, see Kriging (p. 78) or Kriging Algorithms (p. 217).

3. MOGA Algorithm

MOGA is run, using the Kriging as an evaluator. After the first iteration, each population is run
when it reaches the number of samples defined by the Number of Samples Per Iteration property.

4. Evaluate the Population

5. Error Check


The Kriging error predictor is checked for each point.

Yes: Error Acceptable

Each point is validated for error. If the error for a given point is acceptable, the approximated
point is included in the next population to be run through the MOGA algorithm (return to Step
3).

No: Error Not Acceptable

If the error is not acceptable, the points are promoted as design points. The new design points
are used to improve the Kriging (return to Step 2) and are included in the next population to
be run through the MOGA algorithm (return to Step 3).

6. Convergence Validation

The optimization is validated for convergence.

Yes: Optimization Converged

MOGA converges when the maximum allowable Pareto percentage has been reached. When
this happens, the process is stopped.

No: Optimization Not Converged

If the optimization is not converged, the process continues to the next step.

7. Stopping Criteria Validation

If the optimization has not converged, it is validated for fulfillment of the stopping criteria.

Yes: Stopping Criteria Met

When the maximum number of iterations has been reached, the process is stopped without
having reached convergence.

No: Stopping Criteria Not Met

If the stopping criteria have not been met, the MOGA algorithm is run again (return to Step 3).

8. Conclusion

Steps 2 through 7 are repeated in sequence until the optimization has converged or the stopping
criteria have been met. When either of these things occurs, the optimization concludes.

Decision Support Process


The Decision Support Process is a goal-based, weighted, aggregation-based design ranking technique.
It is the final step in the optimization, where the optimization is post-processed.

During an optimization, the DesignXplorer optimizer generates a sample set as follows:


Screening: the sample set corresponds to the number of Screening points plus the Min-Max Search
results (omitting search results that duplicate existing points)

Single-objective optimization (NLPQL, MISQP, ASO): the sample set corresponds to the iteration points

Multiple-objective optimization (MOGA, AMO): the sample set corresponds to the final population

The Decision Support Process sorts the sample set (using the cost function) in order to extract the best
candidates. The cost function takes into account both the Importance level of objectives and constraints
and the feasibility of points. (The feasibility of a point depends on how constraints are handled. When
the Constraint Handling property is set to Relaxed, all unfeasible points are included in the sort; when
the property is set to Strict, all unfeasible points are removed from the sort.) Once the sample set has
been sorted, you can change the Importance level and/or Constraint Handling properties for one or
more constraints or objectives without causing DesignXplorer to create more design points. The Decision
Support Process will re-sort the existing sample set.

Given n input parameters, m output parameters, and their individual targets, the collection of objectives
is combined into a single, weighted objective function, Φ, which is sampled by means of a direct Monte
Carlo method using a uniform distribution. The candidate points are subsequently ranked by ascending
magnitude of Φ. For the case where all continuous input parameters have usable values of type
"Continuous", Φ is given by the following:

Φ = Σ_{i=1}^{n} w_i N_i + Σ_{j=1}^{m} w_j M_j   (69)

where:

w_i and w_j = weights defined in Equation 72 (p. 254)
N_i and M_j = normalized objectives for input and output parameters, respectively

The normalized objectives (metrics) are:

N_i = |x − x_t| / (x_u − x_l)   (70)

M_j = |y − y_t| / max(|y_min|, |y_max|)   (71)

where:

x = current value for input parameter i


xt, yt = corresponding "target value"
y = current value for output parameter j
xl and xu = lower and upper values, respectively, for input parameter i
ymin and ymax = corresponding lower and upper bounds, respectively, for output parameter j

The fuzziness of the combined objective function derives from the weights w, which are simply defined
as follows:

w = { w_H   if the Importance is "Higher"
      w_D   if the Importance is "Default"   (72)
      w_L   if the Importance is "Lower"

where w_H > w_D > w_L are the fixed weight values assigned to the three Importance levels.


The labels used are defined in Defining Optimization Objectives and Constraints (p. 153).

The targets represent the desired values of the parameters, and are defined for the continuous input
parameters as follows:

x_t = { (x_l + x_u)/2   if Objective Type is "No Objective"
        x_l             if Objective Type is "Minimize"
        x_t*            if Objective Type is "Seek Target"   (73)
        x_u             if Objective Type is "Maximize"

where x_t* is the specified target value.

and, for the output parameters we have the following desired values:
,  
      
 
 ,  
     

 * ,  C

   V < = U  Bd d U  Bd  dd d   *
 

,  C

   V < = U  Bd d U  B d  dd d  
*

  ,  
       

*


 ,  C

   V > = Lw Bd  d Lw Bd  dd d  
*

(74)

  ,  C

   V > = Lw Bd d Lw Bd  d
*
d d  
*


#$
 ,  
   !

,  C

   Lw Bd <= V <= U  Bd d  <=  <= 
%
% &

 %,  C

   Lw Bd <= V <= U  Bd d <
%

%
 ,  C

   Lw Bd <= V <= U  Bd d >
&

where:

yt * = user-specified target
yt1 = the constraint lower bound
yt2 = the constraint upper bound

Thus, Equation 72 (p. 254) and Equation 73 (p. 255) constitute the input parameter objectives for the
continuous input parameters and Equation 72 (p. 254) and Equation 74 (p. 255) constitute the output
parameter objectives and constraints.
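The core of the Decision Support sort can be sketched for the simple continuous case of Equation 69: each sample's cost Φ is a weighted sum of normalized distances to the targets, and candidates are read from the low end of the sorted list. The parameter names, weights, targets, and samples below are all hypothetical:

```python
def phi(sample, objectives):
    """Sketch of the Decision Support cost function (Equation 69): a
    weighted sum of normalized distances to each parameter's target.
    `objectives` maps parameter name -> (target, lower, upper, weight)."""
    total = 0.0
    for name, (target, lo, hi, weight) in objectives.items():
        total += weight * abs(sample[name] - target) / (hi - lo)
    return total

# Hypothetical two-parameter problem: minimize x, seek y close to 5.0.
objectives = {"x": (0.0, 0.0, 10.0, 1.0), "y": (5.0, 0.0, 10.0, 0.5)}
samples = [{"x": 2.0, "y": 5.0}, {"x": 1.0, "y": 9.0}, {"x": 0.5, "y": 6.0}]
ranked = sorted(samples, key=lambda s: phi(s, objectives))
print(ranked[0])  # the candidate with the lowest weighted cost
```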

The following section considers the case where the continuous input parameters have discrete Manu-
facturable Values ("Levels" in the parameters' Table view) and there may be discrete input parameters
of type "Discrete." Let us consider the case where the Manufacturable Values of the continuous input
parameter are defined as the following:

x ∈ { x_1, x_2, ..., x_l }   (75)

with l usable values, the following metric is defined:

N_i = |x − x_t| / (x_MAX − x_MIN)   (76)

where, as before,

xMAX = upper bound of the usable values


xMIN = lower bound of the usable values

The target value x_t is given by the following:

x_t = { x      if Constraint Type is "No Constraint"
        x_t*   if Constraint Type is "Values <= Upper Bound", "Upper Bound" is defined, and x > x_t*
        x      if Constraint Type is "Values <= Upper Bound", "Upper Bound" is defined, and x <= x_t*
        x_t*   if Objective Type is "Seek Target"   (77)
        x      if Constraint Type is "Values >= Lower Bound", "Lower Bound" is defined, and x >= x_t*
        x_t*   if Constraint Type is "Values >= Lower Bound", "Lower Bound" is defined, and x < x_t*

where x_t* is the specified bound or target value.

Thus, the GDO objective equation becomes the following (for parameters with discrete usable values):

Φ = Σ_{i=1}^{n} w_i N_i + Σ_{j=1}^{m} w_j M_j + Σ_{k=1}^{l} w_k D_k   (78)

where D_k is the metric of Equation 76 (p. 255) for the kth parameter with discrete usable values.

Therefore, Equation 71 (p. 254), Equation 72 (p. 254), and Equation 76 (p. 255) constitute the input para-
meter objectives for parameters which may be continuous or possess discrete usable values.

The norms, objectives, and constraints of Equation 76 (p. 255) and Equation 77 (p. 256) are
also adopted to define the input goals for the discrete input parameters of the type Discrete; i.e., those
discrete parameters whose usable alternatives indicate a whole number of some particular design feature
(number of holes in a plate, number of stiffeners, etc.).

Thus, Equation 71 (p. 254), Equation 72 (p. 254), and Equation 76 (p. 255) constitute the input
parameter goals for "Discrete" parameters.

Therefore, the GDO objective function equation for the most general case (where there are continuous
and discrete parameters) can be written as the following:

Φ = Σ_{i=1}^{n} w_i N_i + Σ_{j=1}^{m} w_j M_j + Σ_{k=1}^{l} w_k D_k   (79)

where:

n = number of Continuous input parameters
m = number of Continuous output parameters
l = number of Continuous input parameters with Manufacturable Values ("Levels") and Discrete
parameters

From the normed values it is obvious that the lower the value of Φ, the better the design with respect
to the desired values and importances. Thus, a quasi-random uniform sampling of design points is done
by a Hammersley algorithm and the samples are sorted in ascending order of Φ. The desired number
of designs are then drawn from the top of the sorted list. A crowding technique is employed to ensure
that no two sampled design points are very close to each other in the space of the input parameters.


Rating Candidate Design Points


Each parameter range is divided into 6 zones, or rating scales. The location of a design candidate value
in the range is measured according to the rating scales. For example, for parameter X with a range of
0.9 to 1.1, the rating scale for a design candidate value of 1.0333 is calculated as follows:

(((Absolute(1.0333-1.1))/(1.1-0.9))*6)-(6/2) = 2.001-3 = -1 [one star] (0 indicates neutral; negative
values indicate closer to the target, down to -3; positive values indicate farther from the target, up to
+3)

Following the same procedure, the rating scale for a design candidate value of 0.9333 is 5.001-3 = +2
[two crosses] (away from target). Therefore, the extreme cases are as follows:

1. Design Candidate value of 0.9 (the worst), the rating scale is 6-3 = +3 [three crosses]

2. Design Candidate value of 1.1 (the best), the rating scale is 0-3 = -3 [three stars]

3. Design Candidate value of 1.0 (neutral), the rating scale is 3-3 = 0 [dash]
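The calculation above can be written compactly as follows. The target here is the upper end of the range, 1.1, matching the example, and the function name is illustrative:

```python
def rating_scale(value, target, lower, upper):
    """Sketch of the candidate rating: scale the distance to the target
    into six zones and shift so 0 is neutral, -3 is best (at the target)
    and +3 is worst (farthest from the target)."""
    scaled = abs(value - target) / (upper - lower) * 6
    return round(scaled - 3)

# Parameter range 0.9 to 1.1 with target 1.1 (the example above):
print(rating_scale(1.0333, 1.1, 0.9, 1.1))  # -1: one star
print(rating_scale(0.9333, 1.1, 0.9, 1.1))  # +2: two crosses
print(rating_scale(0.9, 1.1, 0.9, 1.1))     # +3: worst case
```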

Note

Objective-driven parameter values with inequality constraints receive either three stars (the
constraint is met) or three red crosses (the constraint is violated).

Understanding Six Sigma Analysis


A Six Sigma Analysis allows you to determine the extent to which uncertainties in the model affect the
results of an analysis. An uncertainty (or random quantity) is a parameter whose value is impossible to
determine at a given point in time (if it is time-dependent) or at a given location (if it is location-de-
pendent). An example is ambient temperature; you cannot know precisely what the temperature will
be one week from now in a given city.

A Six Sigma Analysis uses statistical distribution functions (such as the Gaussian, or normal, distribution,
the uniform distribution, etc.) to describe uncertain parameters.

Six Sigma Analysis allows you to determine whether your product satisfies Six Sigma quality criteria. A
product has Six Sigma quality if only 3.4 parts out of every 1 million manufactured fail. This quality
definition is based on the assumption that an output parameter relevant to the quality and performance
assessment follows a Gaussian distribution, as shown below.

An output parameter that characterizes product performance is typically used to determine whether a
product's performance is satisfactory. The parameter must fall within the interval bounded by the lower
specification limit (LSL) and the upper specification limit (USL). Sometimes only one of these limits exists.


An example of this is a case when the maximum von Mises stress in a component must not exceed the
yield strength. The relevant output parameter is, of course, the maximum von Mises stress and the USL
is the yield strength. The lower specification limit is not relevant. The area below the probability density
function falling outside the specification interval is a direct measure of the probability that the product
does not conform to the quality criteria, as shown above. If the output parameter does follow a Gaussian
distribution, then the product satisfies a Six Sigma quality criterion if both specification limits are at
least six standard deviations away from the mean value.

In reality, an output parameter rarely exactly follows a Gaussian distribution. However, the definition
of Six Sigma quality is inherently probabilistic -- it represents an admissible probability that parts do
not conform to the quality criteria defined by the specified limits. The nonconformance probability can
be calculated no matter which distribution the output parameter actually follows. For distributions
other than Gaussian, the Six Sigma level is not really six standard deviations away from the mean value,
but it does represent a probability of 3.4 parts per million, which is consistent with the definition of Six
Sigma quality.
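
As a sanity check, the 3.4 parts-per-million figure can be reproduced from the Gaussian cumulative distribution function. Note that it rests on the common Six Sigma convention of allowing a 1.5-sigma long-term shift of the mean; the sketch below (standard Python, not DesignXplorer functionality) makes that assumption explicit:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def nonconformance_ppm(sigma_level, shift=1.5):
    # One-sided nonconformance probability, in parts per million, for a
    # specification limit sigma_level standard deviations from the mean,
    # allowing the conventional 1.5-sigma long-term shift of the mean.
    return norm_cdf(-(sigma_level - shift)) * 1.0e6

print(round(nonconformance_ppm(6.0), 1))  # 3.4 parts per million
```

Without the 1.5-sigma shift, a limit six standard deviations from the mean would correspond to roughly two parts per billion, which is why the shift convention matters for the 3.4 ppm definition.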

This section describes the Six Sigma Analysis tool and how to use it to perform a Six Sigma Analysis.
Principles (SSA)
Guidelines for Selecting SSA Variables
Sample Generation
Weighted Latin Hypercube Sampling
Postprocessing SSA Results
Six Sigma Analysis Theory

Principles (SSA)
Computer models are described with specific numerical and deterministic values: material properties
are entered using certain values, the geometry of the component is assigned a certain length or width,
etc. An analysis based on a given set of specific numbers and values is called a deterministic analysis.
The accuracy of a deterministic analysis depends upon the assumptions and input values used for the
analysis.

While scatter and uncertainty naturally occur in every aspect of an analysis, deterministic analyses do
not take them into account. To deal with uncertainties and scatter, use Six Sigma Analysis, which allows
you to answer the following questions:

If the input variables of a finite element model are subject to scatter, how large is the scatter of the output
parameters? How robust are the output parameters? Here, output parameters can be any parameter that
ANSYS Workbench can calculate. Examples are the temperature, stress, strain, or deflection at a node, the
maximum temperature, stress, strain, or deflection of the model, etc.

If the output is subject to scatter due to the variation of the input variables, then what is the probability
that a design criterion given for the output parameters is no longer met? How large is the probability that
an unexpected and unwanted event takes place (i.e., what is the failure probability)?

Which input variables contribute the most to the scatter of an output parameter and to the failure prob-
ability? What are the sensitivities of the output parameter with respect to the input variables?

Six Sigma Analysis can be used to determine the effect of one or more variables on the outcome of the
analysis. In addition to the Six Sigma Analysis techniques available, ANSYS Workbench offers a set of
strategic tools to enhance the efficiency of the Six Sigma Analysis process. For example, you can graph
the effects of one input parameter versus an output parameter, and you can easily add more samples
and additional analysis loops to refine your analysis.

In traditional deterministic analyses, uncertainties are either ignored or accounted for by applying
conservative assumptions. You would typically ignore uncertainties if you know for certain that the input
parameter has no effect on the behavior of the component under investigation. In this case, only the
mean values or some nominal values are used in the analysis. However, in some situations, the influences
of uncertainties exists but is still neglected, as for the thermal expansion coefficient, for which the
scatter is usually ignored.

Example 7: Accounting for Uncertainties

If you are performing a thermal analysis and want to evaluate the thermal stresses, the equation is:

σtherm = E · α · ΔT

because the thermal stresses are directly proportional to the Young's modulus as well as to the thermal
expansion coefficient of the material.

The table below shows the probability that the thermal stresses will be higher than expected, taking
uncertainty variables into account.

Uncertainty variables taken into account             Probability that the thermal stresses are:
                                                     >5% higher than expected   >10% higher than expected
Young's modulus (Gaussian distribution
with 5% standard deviation)                          ~16%                       ~2.3%
Young's modulus and thermal expansion
coefficient (each with Gaussian distribution
with 5% standard deviation)                          ~22%                       ~8%
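
The first row of this table can be reproduced analytically: because the thermal stress is directly proportional to E, a stress 5% (or 10%) above nominal corresponds to E one (or two) standard deviations above its mean. A short check, assuming the Gaussian input stated in the table (standard Python, not DesignXplorer functionality):

```python
from math import erf, sqrt

def exceedance(z):
    # P(Z > z) for a standard normal variable Z.
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Thermal stress is proportional to E. With a Gaussian E whose standard
# deviation is 5% of its mean, "5% higher than expected" is z = 1 and
# "10% higher than expected" is z = 2:
print(round(exceedance(1.0), 3))  # 0.159 -> the ~16% in the table
print(round(exceedance(2.0), 3))  # 0.023 -> the ~2.3% in the table
```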

Reliability, Quality, and Safety Issues


Use Six Sigma Analysis when issues of reliability, quality, and safety are paramount.

Reliability is typically a concern when product or component failures have significant financial con-
sequences (costs of repair, replacement, warranty, or penalties) or worse, can result in injury or loss of
life.


If you use a conservative assumption, the difference in thermal stresses shown above tells you that
uncertainty or randomness is involved. Conservative assumptions are usually expressed in terms of
safety factors. Sometimes regulatory bodies demand safety factors in certain procedural codes. If you
are not faced with such restrictions or demands, then using conservative assumptions and safety factors
can lead to inefficient and costly over-design. By using Six Sigma Analysis methods, you can avoid over-
design while still ensuring the safety of the component.

Six Sigma Analysis methods even enable you to quantify the safety of the component by providing a
probability that the component will survive operating conditions. Quantifying a goal is the necessary
first step toward achieving it.

Guidelines for Selecting SSA Variables


This section presents useful guidelines for defining your Six Sigma Analysis variables.

Choosing and Defining Uncertainty Variables (p. 260)

Choosing and Defining Uncertainty Variables


First, you should:

Specify a reasonable range of values for each uncertainty variable.

Set reasonable limits on the variability for each uncertainty variable.

More information about choosing and defining uncertainty variables can be found in the following
sections.
Uncertainty Variables for Response Surface Analyses
Choosing a Distribution for a Random Variable
Distribution Functions

Uncertainty Variables for Response Surface Analyses


The number of simulation loops that are required for a Response Surface analysis depends on the
number of uncertainty variables. Therefore, you want to select the most important input variable(s),
the ones you know have a significant impact on the result parameters. If you are unsure which uncertainty
variables are important, include all of the random variables you can think of and then perform a Monte
Carlo analysis. After you learn which uncertainty variables are important and should be included in your
Response Surface Analysis, you can eliminate those that are unnecessary.

Choosing a Distribution for a Random Variable


The type and source of the data you have determines which distribution functions can be used or are
best suited to your needs.
Measured Data
Mean Values, Standard Deviation, Exceedance Values
No Data

Measured Data

If you have measured data, then you must first know how reliable that data is. Data scatter is not just
an inherent physical effect, but also includes inaccuracy in the measurement itself. You must consider
that the person taking the measurement might have applied a "tuning" to the data. For example, if the
data measured represents a load, the person measuring the load may have rounded the measurement
values; this means that the data you receive are not truly the measured values. The amount of this
tuning could provide a deterministic bias in the data that you need to address separately. If possible,
you should discuss any bias that might have been built into the data with the person who provided
that data to you.

If you are confident about the quality of the data, then how you proceed depends on how much data
you have. In a single production field, the amount of data is typically sparse. If you have only a small
amount of data, use it only to evaluate a rough figure for the mean value and the standard deviation.
In these cases, you could model the uncertainty variable as a Gaussian distribution if the physical effect
you model has no lower and upper limit, or use the data and estimate the minimum and maximum
limit for a uniform distribution.
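
As a sketch of this rough estimation (the measured values below are hypothetical), Python's standard statistics module suffices:

```python
import statistics

# Hypothetical small sample of measured values:
data = [98.2, 101.5, 99.8, 100.9, 97.6, 102.1, 100.3]

mean = statistics.mean(data)    # rough figure for the mean value
stdev = statistics.stdev(data)  # sample standard deviation
# If the physical effect is bounded, rough limits for a uniform
# distribution can be estimated from the observed extremes:
lower, upper = min(data), max(data)
print(mean, stdev, lower, upper)
```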

In a mass production field, you probably have a lot of data. In these cases you could use a commercial
statistical package that will allow you to actually fit a statistical distribution function that best describes
the scatter of the data.

Mean Values, Standard Deviation, Exceedance Values

The mean value and the standard deviation are most commonly used to describe the scatter of data.
Frequently, information about a physical quantity is given as a value such as "100±5.5". Often, this form
means that the value "100" is the mean value and "5.5" is the standard deviation. Data in this form implies
a Gaussian distribution, but you must verify this (a mean value and standard deviation can be provided
for any collection of data regardless of the true distribution type). If you have more information, for
example, you know that the data is lognormal distributed, then Six Sigma Analysis allows you to use
the mean value and standard deviation for a lognormal distribution.

Sometimes the scatter of data is also specified by a mean value and an exceedance confidence limit.
The yield strength of a material is sometimes given in this way; for example, a 99% exceedance limit
based on a 95% confidence level is provided. This means that, from the measured data, we can be 95%
sure that in 99% of all cases the property values will exceed the specified limit and only in 1% of all
cases will they drop below the specified limit. The supplier of this information is using the mean value,
the standard deviation, and the number of samples of the measured data to derive this kind of inform-
ation. If the scatter of the data is provided in this way, the best way to pursue this further is to ask for
more details from the data supplier. Because the given exceedance limit is based on the measured data
and its statistical assessment, the supplier might be able to provide you with the details that were used.

If the data supplier does not give you any further information, then you could consider assuming that
the number of measured samples was large. If the given exceedance limit is denoted with x(1-α/2) and
the given mean value is denoted with x̄, then the standard deviation σ can be derived from the equation:

σ = (x̄ - x(1-α/2)) / C

where the values for the coefficient C are:

Exceedance Probability    C
99.5%                     2.5758
99.0%                     2.3263
97.5%                     1.9600
95.0%                     1.6449
90.0%                     1.2816
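
Applied numerically, the derivation looks as follows; the yield-strength numbers are hypothetical, and the large-sample assumption from the text is retained:

```python
# Coefficient C for common exceedance probabilities (one-sided
# standard normal quantiles, from the table above):
C = {99.5: 2.5758, 99.0: 2.3263, 97.5: 1.9600, 95.0: 1.6449, 90.0: 1.2816}

def stddev_from_exceedance(mean, limit, exceedance_pct):
    # Standard deviation of a Gaussian quantity whose values exceed
    # `limit` with the given probability, assuming many measured samples.
    return (mean - limit) / C[exceedance_pct]

# Hypothetical yield strength: mean 300 MPa, 99% exceedance limit 253.5 MPa
print(stddev_from_exceedance(300.0, 253.5, 99.0))  # about 20 MPa
```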

No Data

In situations where no information is available, there is never just one right answer. Below are hints
about which physical quantities are usually described in terms of which distribution functions. This in-
formation might help you with the particular physical quantity you have in mind. Also below is a list
of which distribution functions are usually used for which kind of phenomena. Keep in mind that you
might need to choose from multiple options.

Geometric Tolerances

If you are designing a prototype, you could assume that the actual dimensions of the manufactured parts
would be somewhere within the manufacturing tolerances. In this case it is reasonable to use a uniform
distribution, where the tolerance bounds provide the lower and upper limits of the distribution function.

If the manufacturing process generates a part that is outside the tolerance band, one of two things may
happen: the part must either be fixed (reworked) or scrapped. These two cases are usually on opposite
ends of the tolerance band. An example of this is drilling a hole. If the hole is outside the tolerance band,
but it is too small, the hole can just be drilled larger (reworked). If, however, the hole is larger than the
tolerance band, then the problem is either expensive or impossible to fix. In such a situation, the parameters
of the manufacturing process are typically tuned to hit the tolerance band closer to the rework side,
steering clear of the side where parts need to be scrapped. In this case, a Beta distribution is more appro-
priate.

Often a Gaussian distribution is used. The fact that the normal distribution has no bounds (it spans minus
infinity to infinity) is theoretically a severe violation of the fact that geometrical extensions are described
by finite positive numbers only. However, in practice, this lack of bounds is irrelevant if the standard de-
viation is very small compared to the value of the geometric extension, as is typically true for geometric
tolerances.

Material Data

Very often the scatter of material data is described by a Gaussian distribution.

In some cases the material strength of a part is governed by the "weakest-link theory." The "weakest-link
theory" assumes that the entire part will fail whenever its weakest spot fails. For material properties where
the "weakest-link" assumptions are valid, the Weibull distribution might be applicable.

For some cases, it is acceptable to use the scatter information from a similar material type. For example,
if you know that a material type very similar to the one you are using has a certain material property with
a Gaussian distribution and a standard deviation of 5% around the measured mean value, then you can
assume that for the material type you are using, you only know its mean value. In this case, you could
consider using a Gaussian distribution with a standard deviation of 5% around the given mean value.

Load Data

For loads, you usually only have a nominal or average value. You could ask the person who provided
the nominal value the following questions: Out of 1000 components operated under real life conditions,
what is the lowest load value any one of the components sees? What is the most likely load value? That
is, what is the value that most of these 1000 components are subject to? What is the highest load value
any one component would be subject to? To be safe you should ask these questions not only of the
person who provided the nominal value, but also to one or more experts who are familiar with how
your products are operated under real-life conditions. From all the answers you get, you can then
consolidate what the minimum, the most likely, and the maximum value probably is. As verification,
compare this picture with the nominal value that you would use for a deterministic analysis. The nom-
inal value should be close to the most likely value unless using a conservative assumption. If the nom-
inal value includes a conservative assumption (is biased), then its value is probably close to the maximum
value. Finally, you can use a triangular distribution using the minimum, most likely, and maximum values
obtained.
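
The consolidated minimum, most likely, and maximum values map directly onto a triangular distribution. As a sketch with hypothetical load values (note that Python's random.triangular takes the most likely value, the mode, as its third argument):

```python
import random

rng = random.Random(0)  # fixed seed for reproducibility

# Hypothetical consolidated load values, in newtons:
lowest, most_likely, highest = 800.0, 1000.0, 1500.0

# random.triangular takes (low, high, mode):
loads = [rng.triangular(lowest, highest, most_likely) for _ in range(10000)]
print(sum(loads) / len(loads))  # near the analytic mean (800+1000+1500)/3
```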

Distribution Functions
Beta Distribution
[Figure: Beta distribution probability density function fX(x), with shape parameters r and t and bounds xmin and xmax]

You provide the shape parameters r and t and the distribution lower bound and upper bound xmin and
xmax of the random variable x.

The Beta distribution is very useful for random variables that are bounded at both sides. If linear oper-
ations are applied to random variables that are all subjected to a uniform distribution, then the results
can usually be described by a Beta distribution. For example, if you are dealing with tolerances and
assemblies where the components are assembled and the individual tolerances of the components
follow a uniform distribution (a special case of the Beta distribution), the overall tolerances of the as-
sembly are a function of adding or subtracting the geometrical extension of the individual components
(a linear operation). Hence, the overall tolerances of the assembly can be described by a Beta distribution.
Also, as previously mentioned, the Beta distribution can be useful for describing the scatter of individual
geometrical extensions of components as well.

Exponential Distribution





You provide the decay parameter λ and the shift (or distribution lower bound) xmin of the random
variable x.

The exponential distribution is useful in cases where there is a physical reason that the probability
density function is strictly decreasing as the uncertainty variable value increases. The distribution is
mostly used to describe time-related effects; for example, it describes the time between independent
events occurring at a constant rate. It is therefore very popular in the area of systems reliability and
lifetime-related systems reliability, and it can be used for the life distribution of non-redundant systems.
Typically, it is used if the lifetime is not subjected to wear-out and the failure rate is constant with time.
Wear-out is usually a dominant life-limiting factor for mechanical components that would preclude the
use of the exponential distribution for mechanical parts. However, where preventive maintenance ex-
changes parts before wear-out can occur, then the exponential distribution is still useful to describe
the distribution of the time until exchanging the part is necessary.

Gaussian (Normal) Distribution


[Figure: Gaussian (normal) distribution probability density function fX(x)]

You provide values for the mean value μ and the standard deviation σ of the random variable x.

The Gaussian, or normal, distribution is a fundamental and commonly-used distribution for statistical
matters. It is typically used to describe the scatter of the measurement data of many physical phenomena.
Strictly speaking, every random variable follows a normal distribution if it is generated by a linear
combination of a very large number of other random effects, regardless of which distribution these random
effects originally follow. The Gaussian distribution is also valid if the random variable is a linear combin-
ation of two or more other effects if those effects also follow a Gaussian distribution.

Lognormal Distribution


You provide values for the logarithmic mean value ξ and the logarithmic deviation δ. The parameters
ξ and δ are the mean value and standard deviation of ln(x).

The lognormal distribution is another basic and commonly-used distribution, typically used to describe
the scatter of the measurement data of physical phenomena, where the logarithm of the data would
follow a normal distribution. The lognormal distribution is suitable for phenomena that arise from the
multiplication of a large number of error effects. It is also used for random variables that are the result
of multiplying two or more random effects (if the effects that get multiplied are also lognormally dis-
tributed). It is often used for lifetime distributions such as the scatter of the strain amplitude of a cyclic
loading that a material can endure until low-cycle-fatigue occurs.

Uniform Distribution
[Figure: Uniform distribution probability density function fX(x), bounded by xmin and xmax]

You provide the distribution lower bound and upper bound xmin and xmax of the random variable x.

The uniform distribution is a fundamental distribution for cases where the only information available
is a lower and an upper bound. It is also useful to describe geometric tolerances. It can also be used in
cases where any value of the random variable is as likely as any other within a certain interval. In this
sense, it can be used for cases where "lack of engineering knowledge" plays a role.

Triangular Distribution

[Figure: Triangular distribution probability density function fX(x), defined by xmin, xmlv, and xmax]

You provide the minimum value (or distribution lower bound) xmin, the most likely value xmlv, and
the maximum value (or distribution upper bound) xmax.

The triangular distribution is most helpful to model a random variable when actual data is not available.
It is very often used to capture expert opinions, as in cases where the only data you have are the well-
founded opinions of experts. However, regardless of the physical nature of the random variable you
want to model, you can always ask experts questions like "Out of 1000 components, what are the lowest
and highest load values for this random variable?" and other similar questions. You should also include
an estimate for the random variable value derived from a computer program, as described above. For
more details, see Choosing a Distribution for a Random Variable (p. 260).

Truncated Gaussian (Normal) Distribution


[Figure: Truncated Gaussian distribution probability density function fX(x), with Gaussian parameters μG and σG and truncation limits xmin and xmax]

You provide the mean value μG and the standard deviation σG of the non-truncated Gaussian distribution
and the truncation limits xmin and xmax (or distribution lower bound and upper bound).

The truncated Gaussian distribution typically appears where the physical phenomenon follows a Gaus-
sian distribution, but the extreme ends are cut off or are eliminated from the sample population by
quality control measures. As such, it is useful to describe the material properties or geometric tolerances.

Weibull Distribution


[Figure: Weibull distribution probability density function fX(x), with characteristic value xchr and exponent m]




You provide the Weibull characteristic value xchr, the Weibull exponent m, and the minimum value xmin
(or distribution lower bound). There are several special cases. For xmin = 0 the distribution coincides
with a two-parameter Weibull distribution. The Rayleigh distribution is a special case of the Weibull
distribution with a scale parameter equal to xchr - xmin and m = 2.

In engineering, the Weibull distribution is most often used for strength or strength-related lifetime
parameters, and is the standard distribution for material strength and lifetime parameters for very brittle
materials (for these very brittle materials, the "weakest-link theory" is applicable). For more details, see
Choosing a Distribution for a Random Variable (p. 260).
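
For reference, most of the distributions above have counterparts in Python's standard random module (the truncated Gaussian does not; it is typically sampled by rejection). The parameter values below are hypothetical, and the stdlib parameterizations differ slightly from the descriptions above:

```python
import random

rng = random.Random(1)

x_beta  = rng.betavariate(2.0, 5.0)           # shape parameters; result on [0, 1],
                                              # scale/shift to [xmin, xmax] as needed
x_exp   = 0.5 + rng.expovariate(2.0)          # decay parameter 2.0, shifted by xmin = 0.5
x_gauss = rng.gauss(100.0, 5.0)               # mean, standard deviation
x_logn  = rng.lognormvariate(4.6, 0.1)        # mean and standard deviation of ln(x)
x_unif  = rng.uniform(9.9, 10.1)              # lower bound, upper bound
x_tri   = rng.triangular(9.9, 10.1, 10.0)     # minimum, maximum, most likely value
x_weib  = 1.0 + rng.weibullvariate(2.0, 1.5)  # scale, exponent m, shifted by xmin = 1.0
```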

Sample Generation
For Six Sigma Analysis, the sample generation by default is based on the Latin Hypercube Sampling
(LHS) technique. (You can also select Weighted Latin Hypercube Sampling (WLHS) as the Sampling Type
in the Six Sigma Properties view.) The LHS technique is a more advanced and efficient form of Monte
Carlo analysis. In Latin Hypercube Sampling, the points are randomly generated in a square
grid across the design space, but no two points share input parameters of the same value (i.e., no
point shares a row or a column of the grid with any other point). Generally, the LHS technique requires
20% to 40% fewer simulations loops than the Direct Monte Carlo analysis technique to deliver the same
results with the same accuracy. However, that number is largely problem-dependent.
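
The row-and-column property of LHS can be sketched in a few lines. This is a minimal illustration on the unit hypercube, not DesignXplorer's implementation:

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    # Split each variable's range into n_samples equal bins and use a
    # random permutation per variable, so every bin is hit exactly once
    # (no two points share a "row" or "column" of the grid).
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        perm = list(range(n_samples))
        rng.shuffle(perm)
        columns.append(perm)
    # Place each point at a random position inside its assigned bin.
    return [[(columns[v][i] + rng.random()) / n_samples
             for v in range(n_vars)]
            for i in range(n_samples)]

points = latin_hypercube(10, 2)
```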

Weighted Latin Hypercube Sampling


In some cases, a very small probability of failure (Pf), e.g., on the order of 10^-6, is of interest in engineering
design. Resolving it would require roughly 100/Pf runs of regular Latin Hypercube samples. To reduce the
number of runs, Weighted Latin Hypercube Sampling (WLHS) may be used to generate more (biased)
samples in the region of interest.

In WLHS, the input variables are discretized unequally across their design space. The probability of
occurrence assigned to each cell (or hypercube, in multiple dimensions) is evaluated according to the
topology of the output response, such that the cells are relatively smaller around the minimum and
maximum of the response. Because the cell sizing is driven by the output response, WLHS is somewhat
unsymmetrical (biased).

In general, WLHS is intended to stretch the distribution farther out into the tails (lower and upper) with
fewer runs than Latin Hypercube Sampling (LHS). In other words, given the same number of runs, WLHS
is expected to resolve a smaller Pf than LHS. Because of this bias, however, the evaluated Pf may differ
somewhat from the LHS result.

Postprocessing SSA Results


Histogram (p. 267)

Cumulative Distribution Function (p. 268)

Probability Table (p. 269)

Statistical Sensitivities in a Six Sigma Analysis (p. 270)

Histogram
A histogram plot is most commonly used to visualize the scatter of a Six Sigma Analysis variable. A
histogram is derived by dividing the range between the minimum value and the maximum value into
intervals of equal size. Then Six Sigma Analysis determines how many samples fall within each interval,
that is, how many "hits" landed in each interval.

Six Sigma Analysis also allows you to plot histograms of your uncertainty variables so you can double-
check that the sampling process generated the samples according to the distribution function you
specified. For uncertainty variables, Six Sigma Analysis not only plots the histogram bars, but also a
curve for values derived from the distribution function you specified. Visualizing histograms of the un-
certainty variables is another way to verify that enough simulation loops have been performed. If the
number of simulation loops is sufficient, the histogram bars will:

Be close to the curve that is derived from the distribution function

Be "smooth" (without large "steps")

Not have major gaps (no hits in an interval where neighboring intervals have many hits)

However, if the probability density function is flattening out at the far ends of a distribution (for example,
the exponential distribution flattens out for large values of the uncertainty variable) then there might
logically be gaps. Hits are counted in whole numbers only, and as the expected number of hits per interval
gradually gets smaller, an interval can end up with zero hits.
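
The interval-counting step can be sketched as follows (a generic illustration, not DesignXplorer's implementation):

```python
def histogram(values, n_bins):
    # Divide the range between the minimum and maximum value into
    # intervals of equal size and count the "hits" in each interval.
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)  # maximum goes in the last bin
        counts[i] += 1
    return counts

print(histogram([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 5))  # [2, 2, 2, 2, 2]
```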

Cumulative Distribution Function


The cumulative distribution function is a primary review tool if you want to assess the reliability or the
failure probability of your component or product. Reliability is defined as the probability that no failure
occurs. Hence, in a mathematical sense, reliability and failure probability are different ways of examining
the same problem and numerically they complement each other (they sum to 1.0). The cumulative
distribution function value at any given point expresses the probability that the respective parameter
value will remain below that point.

The value of the cumulative distribution function at the location x0 is the probability that the values of
X stay below x0. Whether this probability represents the failure probability or the reliability of your
component depends on how you define failure; for example, if you design a component such that a
certain deflection should not exceed a certain admissible limit, then a failure event occurs if the critical
deflection exceeds this limit. Thus, for this example, the cumulative distribution function is interpreted
as the reliability curve of the component. On the other hand, if you design a component such that the
eigenfrequencies are beyond a certain admissible limit, then a failure event occurs if an eigenfrequency
drops below this limit. So for this example, the cumulative distribution function is interpreted as the
failure probability curve of the component.

The cumulative distribution function also lets you visualize what the reliability or failure probability
would be if you chose to change the admissible limits of your design.

A cumulative distribution function plot is an important tool to quantify the probability that the design
of your product does or does not satisfy quality and reliability requirements. The value of a cumulative
distribution function of a particular output parameter represents the probability that the output para-
meter will remain below a certain level as indicated by the values on the x-axis of the plot.

Example 8: Illustration of Cumulative Distribution Function

The probability that the Shear Stress Maximum will remain less than a limit value of 1.71E+5 is about
93%, which also means that there is a 7% probability that the Shear Stress Maximum will exceed the
limit value of 1.71E+5.

Figure 10: Cumulative Distribution Function

For more information, see Six Sigma Analysis Theory (p. 271).
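
Numerically, the value read from such a chart is simply the fraction of samples at or below the limit. A sketch with hypothetical shear-stress samples:

```python
def empirical_cdf(samples, x0):
    # Estimated probability P(X <= x0) from a set of sampled results.
    return sum(1 for s in samples if s <= x0) / len(samples)

# Hypothetical maximum shear stress samples:
stresses = [1.52e5, 1.60e5, 1.68e5, 1.70e5, 1.73e5]
print(empirical_cdf(stresses, 1.71e5))  # 0.8 -> 80% below the limit
```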

Probability Table
Instead of reading data from the cumulative distribution chart, you can also obtain important information
about the cumulative distribution function in tabular form. A Probability Table is available that is de-
signed to provide probability values for an even spread of levels of an input or output parameter. You
can view the table in either Quantile-Percentile (Probability) mode or Percentile-Quantile (Inverse
Probability) mode. The Probability Table lets you find out the parameter levels corresponding to prob-
ability levels that are typically used for the design of reliable products. If you want to see the probability
of a value that is not listed, you can add it to the table. Likewise, you can add a probability or sigma-
level and see the corresponding values. You can also delete values from the table. For more information,
see Using Statistical Postprocessing (p. 181).

Figure 11: Probability Tables

Note

Both tables will have more rows, i.e., include more data, if the number of samples is increased.
If you are designing for high product reliability, i.e., a low probability that it does not conform
to quality or performance requirements, then the sample size must be adequately large to
address those low probabilities. Typically, if the probability that your product does not conform
to the requirements is denoted with "Preq," then the minimum number of samples should be
determined by: Nsamp = 10.0 / Preq.

For example, if your product has a probability of Preq = 1.0e-4 that it does not conform to
the requirements, then the minimum number of samples should be Nsamp = 10.0 / 1.0e-4
= 1.0e+5.

Statistical Sensitivities in a Six Sigma Analysis


The available sensitivities plots allow you to efficiently improve your design toward a more reliable and
better quality design, or to save money in the manufacturing process while maintaining the reliability
and quality of your product. You can view a sensitivities plot for any output parameter in your model.
See Sensitivities Chart (SSA) (p. 49) for more information.

The sensitivities available under the Six Sigma Analysis and the Goal Driven Optimization views are
statistical sensitivities. Statistical sensitivities are global sensitivities, whereas the parameter sensitivities
available under the Responses view are local sensitivities. The global, statistical sensitivities are based
on a correlation analysis using the generated sample points, which are located throughout the entire
space of input parameters. The local parameter sensitivities are based on the difference between the
minimum and maximum value obtained by varying one input parameter while holding all other input
parameters constant. As such, the values obtained for local parameter sensitivities depend on the values
of the input parameters that are held constant. Global, statistical sensitivities do not depend on the
values of the input parameters, because all possible values for the input parameters are already taken
into account when determining the sensitivities.

Design exploration displays sensitivities as both a bar chart and a pie chart. The bar chart describes the
sensitivities in a signed fashion (taking the signs into account): a positive sensitivity indicates that
increasing the value of the uncertainty variable increases the value of the result parameter for which
the sensitivities are plotted. Conversely, a negative sensitivity indicates that increasing the uncertainty
variable value reduces the result parameter value.

Using a sensitivity plot, you can answer the following important questions.

How can I make the component more reliable or improve its quality?
If the results for the reliability or failure probability of the component do not reach the expected levels,
or if the scatter of an output parameter is too wide and therefore not robust enough for a quality
product, then you should make changes to the important input variables first. Modifying an input
variable that is insignificant would be a waste of time.

Of course, you are not in control of all uncertainty parameters. A typical example where you have very
limited means of control involves material properties. For example, if it turns out that the environmental
temperature (outdoor) is the most important input parameter, then there is probably nothing you can
do. However, even if you find out that the reliability or quality of your product is driven by parameters
that you cannot control, this information is still important: it may indicate a fundamental flaw in your
product design! You should watch for influential parameters like these.

If the input variable you want to tackle is a geometry-related parameter or a geometric tolerance, then
improving the reliability and quality of your product means that it might be necessary to change to a
more accurate manufacturing process or use a more accurate manufacturing machine. If it is a material
property, then there might be nothing you can do about it. However, if you only had a few measurements
for a material property and consequently used only a rough guess about its scatter, and the material
property turns out to be an important driver of product reliability and quality, then it makes sense to
collect more raw data.

How can I save money without sacrificing the reliability or the quality of the product?
If the results for the reliability or failure probability of the component are acceptable or if the scatter
of an output parameter is small and therefore robust enough for a quality product, then there is usually
the question of how to save money without reducing the reliability or quality. In this case, you should
first make changes to the input variables that turned out to be insignificant, because they do not affect
the reliability or quality of your product. If it is the geometrical properties or tolerances that are insigni-
ficant, you can consider applying a less expensive manufacturing process. If a material property turns
out to be insignificant, then this is not typically a good way to save money, because you are usually
not in control of individual material properties. However, the loads or boundary conditions can be a
potential for saving money, but in which sense this can be exploited is highly problem-dependent.

Six Sigma Analysis Theory


The purpose of a Six Sigma Analysis is to gain an understanding of the impact of uncertainties associated
with the input parameters of your design. This goal is achieved using a variety of statistical measures
and postprocessing tools.

Statistical Postprocessing
Convention: the statistics below are defined for a set of n observations x1, x2, ..., xn, denoted xi.

1. Mean Value

Mean is a measure of average for a set of observations. The mean of a set of n observations is
defined as follows:

Release 15.0 - SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates. 271
DesignXplorer Theory

$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i$   (80)

2. Standard Deviation

Standard deviation is a measure of dispersion from the mean for a set of observations. The standard
deviation of a set of n observations is defined as follows:

$\hat{\sigma} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \hat{\mu}\right)^{2}}$   (81)

3. Sigma Level

Sigma level is calculated as the inverse cumulative distribution function of a standard Gaussian dis-
tribution at a given percentile. Sigma level is used in conjunction with standard deviation to measure
data dispersion from the mean. For example, a pair consisting of a quantile X and a sigma level n
means that the value X is about n standard deviations away from the sample mean.

4. Skewness

Skewness is a measure of degree of asymmetry around the mean for a set of observations. The ob-
servations are symmetric if distribution of the observations looks the same to the left and right of
the mean. Negative skewness indicates the distribution of the observations being left-skewed. Pos-
itive skewness indicates the distribution of the observations being right-skewed. The skewness of
a set of n observations is defined as follows:
$\text{skewness} = \frac{n}{(n-1)(n-2)}\sum_{i=1}^{n}\left(\frac{x_i - \hat{\mu}}{\hat{\sigma}}\right)^{3}$   (82)

5. Kurtosis

Kurtosis is a measure of relative peakedness/flatness of distribution for a set of observations. It is


generally a relative comparison with the normal distribution. Negative kurtosis indicates a relatively
flat distribution of the observations compared to the normal distribution, while positive kurtosis
indicates a relatively peaked distribution of the observations. As such, the kurtosis of a set of n ob-
servations is defined with calibration to the normal distribution as follows:

$\text{kurtosis} = \left\{\frac{n(n+1)}{(n-1)(n-2)(n-3)}\sum_{i=1}^{n}\left(\frac{x_i - \hat{\mu}}{\hat{\sigma}}\right)^{4}\right\} - \frac{3(n-1)^{2}}{(n-2)(n-3)}$   (83)


where $\hat{\mu}$ and $\hat{\sigma}$ represent the mean and standard deviation, respectively.

Table 1: Different Types of Kurtosis

kurtosis < 0 : flat distribution
kurtosis = 0 : normal distribution
kurtosis > 0 : peaked distribution
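These four statistics can be checked with a direct implementation of Equations 80 through 83. The following is a plain-Python illustration, not DesignXplorer code:

```python
import math

def mean(x):
    # Equation 80: arithmetic mean
    return sum(x) / len(x)

def std_dev(x):
    # Equation 81: sample standard deviation (n - 1 in the denominator)
    n, m = len(x), mean(x)
    return math.sqrt(sum((xi - m) ** 2 for xi in x) / (n - 1))

def skewness(x):
    # Equation 82: sample skewness; 0 for a symmetric sample
    n, m, s = len(x), mean(x), std_dev(x)
    return n / ((n - 1) * (n - 2)) * sum(((xi - m) / s) ** 3 for xi in x)

def kurtosis(x):
    # Equation 83: sample kurtosis calibrated to the normal
    # distribution, so a normal sample tends toward 0
    n, m, s = len(x), mean(x), std_dev(x)
    g = sum(((xi - m) / s) ** 4 for xi in x)
    return (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) * g \
        - 3.0 * (n - 1) ** 2 / ((n - 2) * (n - 3))
```

For the symmetric sample [1, 2, 3, 4, 5], the skewness is exactly zero and the kurtosis is negative, i.e., flat relative to a normal distribution.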

6. Shannon Entropy

Entropy is a concept first introduced in classical thermodynamics. It is a measure of disorder in a


system. The concept of entropy was later extended and recognized by Claude Shannon in the domain
of information/communication as a measure of uncertainty over the true content of a message. In
statistics, the entropy is further reformulated as a function of mass density, as follows:
$S = -\sum_{i} p_i \, \ln p_i$   (84)

where $p_i$ represents the probability mass of a parameter. In the context of statistics/probability, entropy


becomes a measure of complexity and predictability of a parameter. A more complex, and less
predictable, parameter carries higher entropy, and vice versa. For example, a parameter characterized
with uniform distribution has higher entropy than that with truncated triangular distribution (within
the same bounds). Hence, the parameter characterized with uniform distribution is less predictable
compared to the triangular distribution.

$S = -\sum_{i} \frac{N_i}{N}\,\ln\!\left(\frac{N_i}{N\,w}\right)$   (85)

The probability mass is used in a normalized fashion, such that not only the shape but also the range
of variability of the distribution is accounted for. This is shown in Equation 85 (p. 273), where Ni/N
is the relative frequency of the parameter falling into a certain interval, and w is the width of the
interval. As a result, Shannon entropy can have a negative value. Below are some comparisons of
the Shannon entropy, where S2 is smaller than S1.
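The normalized entropy of Equation 85 can be estimated from a histogram. This sketch uses equal-width bins and clamps the maximum sample into the last bin; those bin-handling details are choices made for the illustration, not prescribed by the guide:

```python
import math

def shannon_entropy(samples, bins=10):
    # Equation 85: each occupied bin contributes -(Ni/N) * ln(Ni/(N*w)),
    # where w is the bin width, so the range of variability matters and
    # the result can be negative.
    lo, hi = min(samples), max(samples)
    w = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        i = min(int((x - lo) / w), bins - 1)  # clamp x == hi into the last bin
        counts[i] += 1
    n = len(samples)
    return -sum(c / n * math.log(c / (n * w)) for c in counts if c > 0)
```

An evenly spread sample on [0, 1] yields an entropy near zero; compressing the same sample into [0, 0.5] makes the entropy negative, illustrating the remark above.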

7. Taguchi Signal-to-Noise Ratios

Three signal-to-noise ratios are provided in the statistics of each output in your SSA. These ratios
are as follows:

Nominal is Best


Smaller is Better

Larger is Better

Signal-to-noise (S/N) ratios are measures used to optimize control parameters in order to achieve a
robust design. These measures were first proposed by Dr. Genichi Taguchi of the Nippon Telegraph and
Telephone Company, Japan, to reduce design noise in manufacturing processes. This noise is normally
expressed in statistical terms such as mean and standard deviation (or variance). In computer
aided engineering (CAE), these ratios have been widely used to achieve a robust design in computer
simulations. For a design to be robust, the simulations are carried out with an objective to minimize
the variances. Minimizing the variance of designs/simulations can be done with or without targeting
a certain mean. In design exploration, minimum variance targeted at a certain mean (called Nominal
is Best) is provided, and is given as follows:

$SN_{NB} = 10\,\log_{10}\!\left(\frac{\hat{\mu}^{2}}{\hat{\sigma}^{2}}\right)$   (86)

where $\hat{\mu}$ and $\hat{\sigma}$ represent the mean and standard deviation, respectively.




Nominal is Best is a measure used for characterizing design parameters such as model dimension
in a tolerance design, in which a specific dimension is required, with an acceptable standard deviation.

In some designs, however, the objectives are to seek a minimum or a maximum possible at the price
of any variance.

For the cases of minimum possible (Smaller is Better), which is a measure used for characterizing output
parameters such as model deformation, the S/N is expressed as follows:
$SN_{SB} = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\right)$   (87)

For the cases of maximum possible (Larger is Better), which is a measure used for characterizing output
parameters such as material yield, the S/N is formulated as follows:

$SN_{LB} = -10\,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right)$   (88)


These three S/N ratios are mutually exclusive. Only one of the ratios can be optimized for any given
parameter. For a better design, these ratios should be maximized.
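Implemented directly, Equations 86 through 88 look as follows. The 10·log10 decibel scaling used here is the conventional Taguchi form; treat the exact constants as an assumption of this illustration:

```python
import math

def sn_nominal_is_best(mean, std):
    # Equation 86: rewards a response that holds its target mean
    # with little scatter
    return 10.0 * math.log10(mean ** 2 / std ** 2)

def sn_smaller_is_better(y):
    # Equation 87: rewards uniformly small response values
    return -10.0 * math.log10(sum(yi ** 2 for yi in y) / len(y))

def sn_larger_is_better(y):
    # Equation 88: rewards uniformly large response values
    return -10.0 * math.log10(sum(1.0 / yi ** 2 for yi in y) / len(y))
```

All three ratios are maximized for a better design, as stated above; for example, halving the standard deviation at a fixed mean raises the Nominal is Best ratio by about 6 dB.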

8. Minimum and Maximum Values

The minimum and maximum values of a set of n observations are:


$x_{\min} = \min\left(x_1, x_2, \ldots, x_n\right)$   (89)


$x_{\max} = \max\left(x_1, x_2, \ldots, x_n\right)$   (90)

Note

The minimum and maximum values strongly depend on the number of samples. If you
generate a new sample set with more samples, then chances are that the minimum value
will be lower in the larger sample set. Likewise, the maximum value of the larger sample
set will most likely be higher than for the original sample set. Hence, the minimum and
maximum values should not be interpreted as absolute physical bounds.

9. Statistical Sensitivity Measures

The sensitivity charts displayed under the Six Sigma Analysis view are global sensitivities based
on statistical measures (see Single Parameter Sensitivities). Generally, the impact of an input para-
meter on an output parameter is driven by:

The amount by which the output parameter varies across the variation range of an input parameter.

The variation range of an input parameter. Typically, the wider the variation range is, the larger the
impact of the input parameter will be.

The statistical sensitivities used under Six Sigma Analysis are based on the Spearman rank-order
correlation coefficients, which take both of those aspects into account at the same time.

Basing sensitivities on correlation coefficients follows the concept that the more strongly an output
parameter is correlated with a particular input parameter, the more sensitive it is with respect to
changes of that input parameter.

10. Spearman Rank-Order Correlation Coefficient

The Spearman rank-order correlation coefficient is:

$r_s = \dfrac{\sum_i \left(R_i - \bar{R}\right)\left(S_i - \bar{S}\right)}{\sqrt{\sum_i \left(R_i - \bar{R}\right)^{2}}\;\sqrt{\sum_i \left(S_i - \bar{S}\right)^{2}}}$   (91)

where:

$R_i$ = rank of $x_i$ within the set of observations $\left(x_1, x_2, \ldots, x_n\right)^{T}$
$S_i$ = rank of $y_i$ within the set of observations $\left(y_1, y_2, \ldots, y_n\right)^{T}$
$\bar{R}$, $\bar{S}$ = average ranks of $R_i$ and $S_i$, respectively
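Equation 91 is the Pearson formula applied to ranks instead of raw values. In the sketch below, tied observations receive their average rank, a common convention that the guide does not spell out:

```python
def ranks(values):
    # 1-based ranks; tied values share the average rank of their block
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Equation 91: correlation of the ranks Ri and Si
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only ranks enter the formula, any monotonically increasing relationship yields rs = 1, which is why this coefficient captures nonlinear but monotonic dependencies.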

11. Significance of Correlation Coefficients

Since the sample size n is finite, the correlation coefficient rp is a random variable itself. Hence, the
correlation coefficient between two random variables X and Y usually yields a small, but nonzero
value, even if X and Y are not correlated at all in reality. In this case, the correlation coefficient would


be insignificant. Therefore, we need to find out if a correlation coefficient is significant or not. To


determine the significance of the correlation coefficient, we assume the hypothesis that the correl-
ation between X and Y is not significant at all, i.e., they are not correlated and rp = 0 (null hypothesis).
In this case the variable:

$t = r_p\,\sqrt{\dfrac{n-2}{1 - r_p^{2}}}$   (92)

is approximately distributed like Student's t-distribution with $\nu = n - 2$ degrees of freedom. The
cumulative distribution function of Student's t-distribution is:
$A(t \mid \nu) = \dfrac{1}{\sqrt{\nu}\; B\!\left(\tfrac{1}{2}, \tfrac{\nu}{2}\right)} \int_{-t}^{t} \left(1 + \frac{x^{2}}{\nu}\right)^{-\frac{\nu+1}{2}} dx$   (93)

where:

B(...) = complete Beta function

There is no closed-form solution available for Equation 93 (p. 276). See Abramowitz and Stegun
(Pocketbook of Mathematical Functions, abridged version of the Handbook of Mathematical Functions,
Harry Deutsch, 1984) for more details.

The larger the correlation coefficient rp, the less likely it is that the null hypothesis is true. Also, the
larger the correlation coefficient rp, the larger the value of t from Equation 92 (p. 276), and consequently
the probability A(t|ν) is increased. Therefore, the probability that the null hypothesis is true is given
by 1 - A(t|ν). If 1 - A(t|ν) exceeds a certain significance level, for example 2.5%, then we can assume
that the null hypothesis is true. However, if 1 - A(t|ν) is below the significance level, then it can be
assumed that the null hypothesis is not true and that consequently the correlation coefficient rp is
significant. This limit can be changed in Design Exploration Options (p. 19).
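Numerically, the test proceeds by computing t from Equation 92 and evaluating A(t|ν) from Equation 93 by quadrature, since no closed form exists. The trapezoid rule and step count below are choices made for this illustration:

```python
import math

def t_statistic(r, n):
    # Equation 92: statistic under the null hypothesis r = 0
    return r * math.sqrt((n - 2) / (1.0 - r * r))

def t_cdf_symmetric(t, dof, steps=20000):
    # Equation 93: probability mass of Student's t-distribution
    # between -t and t, by trapezoid quadrature
    c = math.exp(math.lgamma((dof + 1) / 2.0)
                 - math.lgamma(dof / 2.0)) / math.sqrt(dof * math.pi)
    h = 2.0 * t / steps
    total = 0.0
    for i in range(steps + 1):
        x = -t + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * c * (1.0 + x * x / dof) ** (-(dof + 1) / 2.0)
    return total * h

def is_significant(r, n, alpha=0.025):
    # Significant when 1 - A(t|nu) falls below the significance level
    t = abs(t_statistic(r, n))
    return 1.0 - t_cdf_symmetric(t, n - 2) < alpha
```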

12. Cumulative Distribution Function

The cumulative distribution function of sampled data is also called the empirical distribution function.
To determine the cumulative distribution function of sampled data, you need to order the sample
values in ascending order. Let xi be the sampled value of the random variable X having a rank of i,
i.e., being the ith smallest out of all n sampled values. The cumulative distribution function Fi that
corresponds to xi is the probability that the random variable X has values below or equal to xi. Since
we have only a limited amount of samples, the estimate for this probability is itself a random variable.
According to Kececioglu (Reliability Engineering Handbook, Vol. 1, 1991, Prentice-Hall, Inc.), the cumu-
lative distribution function Fi associated with xi is:
$\sum_{k=i}^{n} \binom{n}{k}\, F_i^{\,k} \left(1 - F_i\right)^{n-k} = 0.5$   (94)

Equation 94 (p. 276) must be solved numerically.
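Because the left-hand side of Equation 94 increases monotonically with Fi, bisection solves it reliably. A sketch, where the 0.5 on the right-hand side reflects the 50% rank described by Kececioglu:

```python
import math

def at_least_i_of_n(f, i, n):
    # Left-hand side of Equation 94: probability that at least i of
    # the n samples fall at or below the value with rank i
    return sum(math.comb(n, k) * f ** k * (1.0 - f) ** (n - k)
               for k in range(i, n + 1))

def median_rank(i, n, tol=1e-10):
    # Solve Equation 94 for Fi by bisection on [0, 1]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if at_least_i_of_n(mid, i, n) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The classical Benard approximation Fi ≈ (i − 0.3)/(n + 0.4) reproduces these values closely, which makes a quick sanity check.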

13. Probability and Inverse Probability

The cumulative distribution function of sampled data can only be given at the individual sampled
values x1, x2, ..., xi, xi+1, ..., xn using Equation 94 (p. 276). Hence, the evaluation of the probability that


the random variable is less than or equal to an arbitrary value x requires an interpolation between
the available data points.

If x is, for example, between xi and xi+1, then the probability that the random variable X is less than
or equal to x is:

$F(x) = F_i + \dfrac{x - x_i}{x_{i+1} - x_i}\left(F_{i+1} - F_i\right)$   (95)

The cumulative distribution function of sampled data can only be given at the individual sampled
values x1, x2, ..., xi, xi+1, ..., xn using Equation 94 (p. 276). Hence, the evaluation of the inverse cumu-
lative distribution function for any arbitrary probability value requires an interpolation between the
available data points.

The evaluation of the inverse of the empirical distribution function is most important in the tails of
the distribution, where the slope of the empirical distribution function is flat. In this case, a direct
interpolation between the points of the empirical distribution function similar to Equation 95 (p. 277)
can lead to inaccurate results. Therefore, the inverse standard normal distribution function $\Phi^{-1}$ is
applied for all probabilities involved in the interpolation. If p is the requested probability for which
we are looking for the inverse cumulative distribution function value, and p is between Fi and Fi+1,
then the inverse cumulative distribution function value can be calculated using:



$x = x_i + \left(x_{i+1} - x_i\right) \dfrac{\Phi^{-1}(p) - \Phi^{-1}(F_i)}{\Phi^{-1}(F_{i+1}) - \Phi^{-1}(F_i)}$   (96)
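Both interpolations can be sketched with the standard library's normal distribution helper. Here xs holds the sorted sample values and fs their cumulative probabilities from Equation 94; the names are chosen for this illustration:

```python
from statistics import NormalDist

def empirical_cdf(x, xs, fs):
    # Equation 95: linear interpolation of the empirical CDF
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            return fs[i] + (x - xs[i]) / (xs[i + 1] - xs[i]) * (fs[i + 1] - fs[i])
    raise ValueError("x lies outside the sampled range")

def inverse_empirical_cdf(p, xs, fs):
    # Equation 96: interpolate in the inverse-normal-transformed
    # probability space, which stays accurate in the flat tails
    phi_inv = NormalDist().inv_cdf
    for i in range(len(fs) - 1):
        if fs[i] <= p <= fs[i + 1]:
            a = (phi_inv(p) - phi_inv(fs[i])) / (phi_inv(fs[i + 1]) - phi_inv(fs[i]))
            return xs[i] + a * (xs[i + 1] - xs[i])
    raise ValueError("p lies outside the sampled probability range")
```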

Troubleshooting
An ANSYS DesignXplorer Update operation returns an error message saying that one or several design
points have failed to update.
Click the Show Details button in the error dialog to see the list of design points that failed to update
and the associated error messages. Each failed design point is automatically preserved for you at the
project level, so you can Return to the Project and Edit the Parameter Set cell in order to see the
failed design points and investigate the reason for the failures.

This error means that the project failed to update correctly for the parameter values defined in the
listed design points. There may be a variety of failure reasons, from a lack of a license to a CAD
model generation failure, for instance. If you try to update the system again, only the failed design
points will be attempted for a new Update, and if the update is successful, the operation will com-
plete. If there are persistent failures with a design point, copy its values to the Current Design Point,
attempt a Project Update, and edit the cells in the project that are not correctly updated in order
to investigate.

When you open a project, one or more DesignXplorer components are marked with a black "X" icon
on the Project Schematic.
The black X icon next to a DesignXplorer component indicates a component Status of Disabled. This
status is given to DesignXplorer components that fail to load properly because the project has been
corrupted by the absence of necessary files (most often the parameters.dxdb and paramet-
ers.params files). The Messages view displays a warning message indicating the reason for the failure
and listing each of the disabled components.

When a DesignXplorer component is disabled, you cannot Update, Edit, or Preview the component.
Try one of the following methods to address the issue.

Method 1: (recommended) Navigate to the Files view, locate the missing files if possible, and
copy them into the correct project folder (the parameters.dxdb and parameters.params
files are normally located in the subdirectory <project name>_files\dpall\global\DX).
With the necessary files restored, the disabled components should load successfully.

Method 2: Delete all DesignXplorer systems containing disabled components. Once the project
is free of corrupted components, you can save the project and set up new DesignXplorer systems.

Workbench interacts with the Excel application.


When an Excel file is added to an Excel component in a project, an instance of the Excel application is
launched in the background. If you then open any other Excel workbook, either by double-clicking it in
the Windows Explorer window or by clicking it in the Recent items option on the Start Menu, that
action attempts to use the instance of the Excel application that is already running. As a result, all of
the background workbooks are displayed in the foreground, and any update of the Excel components
in Workbench prevents the user from modifying any opened Excel workbooks. It is therefore recommended
that, to open or view a different workbook, you do not use the same instance of Excel that Workbench
is using.

To open or view a different Excel workbook (other than the ones added to the project):


1. Start a new instance of Excel via the Start menu (i.e., Start > All Programs > Microsoft Office >
Microsoft Excel).

2. From the new instance of Excel, open the workbook via the File > Open menu option.

The workbook will open in the new instance of Excel.

Note

You can still view the Excel workbooks added to the project by selecting Open File in
Excel from the right-click context menu.

Fatal error causes project to get stuck in a Pending state.


If a fatal error occurs while a cell of DesignXplorer is either in a Pending state or resuming from a pending
update, the project could be stuck in the Pending state even if the pending updates are no longer running.

To recover from this situation, you can abandon the pending updates by selecting the Abandon
Pending Updates option from the Tools menu.

Incorrect derived parameter definition causes a DesignXplorer component to be in an Unfulfilled state.


Derived parameters must be correctly defined in order to update a DesignXplorer component or a
project. A derived parameter has an invalid definition when, for example, it is defined without units
or the value of its Expression property has incorrect syntax. DesignXplorer system cells will remain
in an Unfulfilled state until the error is fixed.

Additionally, prior to Workbench version 14.0, some DesignXplorer components could have been
incorrectly updated in spite of the error in the parameter definition. If you open a project created
with an earlier release version and find that components that were previously in an Up to Date
state are now in an Unfulfilled state, verify that the derived parameters in the project are correctly
defined.

To verify derived parameter definition, right-click on the Parameter Set bar and select Edit to open
the Parameter Set tab. In the Outline of All Parameters view, derived parameters that are incor-
rectly defined will have a red cell containing an error message in the Value column. Below, in the
Properties of Outline view, the same error message will display in the Expression property cell.

Based on the information in the error message, correct derived parameter definitions as follows:

1. Select the parameter in the Outline of All Parameters view.

2. In the Properties of Outline view, enter a valid expression for the Expression property.

3. Refresh the project or the unfulfilled DesignXplorer component to apply the changes.

Appendices
Extended CSV File Format (p. 281)

Extended CSV File Format


ANSYS DesignXplorer exports and imports CSV files with an extended CSV format. While the files
primarily conform to the usual CSV standard, several non-standard formatting conventions are also
supported.

Overview of the usual CSV standard:

CSV stands for Comma-Separated Values.

The optional header line indicates the name of each column.

Each line is an independent record made of fields separated by commas.

The format is not dependent on the locale, which means that the real number 12.345 is always
written as 12.345, regardless of the regional settings of the computer.

For example:

field_name,field_name,field_name CRLF
aaa,bbb,ccc CRLF
zzz,yyy,xxx CRLF

ANSYS DesignXplorer supports the following extensions of the standard CSV format:

If a line starts with a # character, it is considered to be a comment line instead of a header or data
line and is ignored.

The header line is mandatory. It is the line where each parameter is identified by its ID (P1, P2, ...,
Pn) to describe each column. The IDs of the parameters in the header line match the IDs of the
parameters in the project.

The first column is used to indicate a name for each row.

A file can contain several blocks of data, with the beginning of each block being determined by a
new header line.

File example:

# Design Points of Design of Experiments


# P1 - ROADTHICKNESS ; P2 - ARCRADIUS ; P3 - MASS ; P4 - DEFORMATION
Name, P1, P2, P3, P4
1,2,120,9850000,0.224
2,1.69097677,120,9819097.68,0.56671394


3,2.30902323,120,9880902.32,0.072276773
4,2,101.4586062,8366688.5,0.271120983
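A minimal reader for this extended format might look as follows. The assumption that a new block starts at a header line whose first field is "Name" is taken from the example above and may not hold for every file DesignXplorer writes:

```python
import csv
from io import StringIO

def read_extended_csv(text):
    # '#' lines are comments; a header line starts a new block of records
    blocks, current = [], None
    for row in csv.reader(StringIO(text)):
        if not row or row[0].startswith("#"):
            continue  # skip blank and comment lines
        if row[0] == "Name":
            current = {"columns": row[1:], "rows": []}
            blocks.append(current)
        elif current is not None:
            current["rows"].append((row[0], [float(v) for v in row[1:]]))
    return blocks
```

Applied to the example file above, this returns a single block whose columns are P1 through P4 and whose four rows carry the design point values.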

Index

Design Exploration, 27, 63, 77, 125, 181, 185
    options, 19
    overview, 27
Symbols Parameters Correlation, 51
2D contour graph chart, 104 reports, 210
2D Slices chart, 107 user interface, 6
3D contour graph chart, 105 workflow, 13
design goals
defining, 153
A Design of Experiments, 29
ACT
Six Sigma, 46
external optimizer extensions, 146
design points, 12
Adaptive Multiple-Objective Optimization, 251
adding, 160
Adaptive Single-Objective, 241
automatically saving to parameter set, 196
cache, 197
C failed, 200
cache of design point results, 197 handling failed , 203
candidate points inserting, 197
candidate points results, 173 log files, 198
display, 173 preserving, 202
properties, 174 preventing failures, 200
creating, 160 raw optimization data, 198
custom, 158 update location, 195
editing, 158 update order, 195
retrieving, 159 updating via RSM, 208
verifying, 161 using, 193
viewing, 158 design points vs parameter plot
working with, 157 object reference, 31
constrained design spaces, 230 design values, 191
constraints, 153 direct optimization
contour graphs transferring data, 127
2D, 104 distribution
3D, 105 choosing for random variables, 260
convergence criteria chart, 162 data scatter values, 261
multiple-objective optimization, 163 measured data, 260
single-objective optimization, 164 no data, 262
convergence curves chart types, 263
Kriging, 82 DOE solution, 63
Sparse Grid, 88 domain
correlation input parameters, 149
correlation matrix chart, 34 parameter relationships, 150
correlation scatter chart, 33
correlation sensitivities chart, 34 E
determination histogram chart, 35
error messages, 279
determination matrix chart, 34
extensions
parameters correlation, 31
optimization, 146
cumulative distribution function, 268
G
D genetic algorithm, 246
database
goal driven optimization
raw optimization data, 198
candidate points, 173
resuming, 5
charts, 162
decision support process, 253


convergence criteria, 162 Chart view, 166


creating custom candidate points, 160 input parameter, 171
creating design points, 160 objective/constraint, 170
creating system, 126 Outline view, 169
creating verification points, 160 parameter relationship, 172
decision support process, 253 sparklines, 169
defining domain, 148
defining input parameters, 149 I
defining parameter relationships, 150 input parameters
defining objectives and constraints, 153 changes to, 185, 190
Direct, 125 continuous, 188
direct optimization, 127 discrete, 187
external optimizers, 146 manufacturable values, 188
history, 165 using, 186
methods, 128
multiple-objective methods, 244 L
(see also Pareto dominance) Local Sensitivity Chart
Adaptive Multiple-Objective Optimization, 251 example, 116
MOGA , 246 local sensitivity charts
MOGA convergence criteria, 245 description, 112
properties display, 113
Adaptive Multiple-Objective , 142 example
Adaptive Single-Objective , 139 Local Sensitivity chart, 120
MISQP, 137 Local Sensitivity chart, 115
MOGA, 132 Local Sensitivity Curves chart, 118
NLPQL, 135 object reference, 39
screening, 130 properties
Response Surface, 125 Local Sensitivity chart, 117
Samples chart, 178 Local Sensitivity Curves chart, 123
sampling method, 232
sensitivities, 175 M
single-objective methods
manufacturable values
Adaptive Single-Objective , 241
changing, 191
MISQP , 240
defining, 188
NLPQL , 233
in Local Sensitivity charts, 112
single-objective methods, 233
in Response charts, 102
theory, 224, 229
Meta-Model
best practices, 227
Changing, 90
principles, 225
Goodness of Fit, 89
sampling , 230
meta-model types
tradeoff studies, 176
Kriging, 78
using, 125
neural network, 83
goal driven optimization algorithms
non-parametric regression, 83
theory
Standard Response Surface - Full 2nd-Order Polynomial, 77
mial , 77
goals, 153
Meta-Models, 77
Goodness of Fit, 89, 92
min-max search, 98
advanced report, 96
MISQP, 240
MOGA, 246
H
histogram , 267 N
history chart, 165
neural network, 83

NLPQL, 233 uncertainty variables, 191
non-parametric regression, 83 parameters correlation, 31
Parameters Correlation, 51
O parameters parallel plot
objectives, 153 object reference, 30
optimization Pareto dominance, 244
creating system, 126 Pending state, 208
direct optimization, 127 points
domain, 148 candidates, 157
input parameters, 149 custom candidates, 158
parameter relationships, 150 postprocessing six sigma analysis results , 267
extensions, 146 cumulative distribution function, 268
downloading, 146 histogram, 267
installing, 146 probability table, 269
loading, 147 sensitivities, 270
project setup, 148 Predicted vs. Observed chart, 95
selecting, 147 probability table, 269
methods, 128 project report, 210
object reference, 40
objectives and constraints, 153 R
properties refinement, 89
Adaptive Multiple-Objective , 142 auto-refinement
Adaptive Single-Objective , 139 Kriging, 79
MISQP, 137 Sparse Grid, 87
MOGA, 132 Kriging, 78
NLPQL, 135 manual, 92, 98
screening, 130 refinement points
using, 125 adding, 160
optimization algorithms Remote Solve Manager, 208
theory reports, 210
Adaptive Multiple-Objective Optimization, 251 Response chart
Adaptive Single-Objective, 241 2D contour graph, 104
MISQP, 240 2D Slices graph, 107
MOGA, 246 3D contour graph, 105
NLPQL, 233 display, 102
options, 19 Goodness of Fit, 92
output parameters object reference, 38
changes to, 185 Response Chart
using, 193 example, 110
properties, 111
P response point, 37
parameter relationships, 150 response points, 13
sampling, 230 adding, 160
parameters, 10, 185 response surface
changes to, 185, 190 Kriging
design variables, 191 auto-refinement properties, 81
input, 186 auto-refinement setup, 79
continuous, 188 convergence curves chart, 82
discrete, 187 meta-model refinement, 89
manufacturable values, 188 object reference, 35
output, 193 Sparse Grid
parameter relationships, 150 Sparse Grid convergence curves chart, 88


Sparse Grid properties, 87 Local Sensitivity chart, 115


Response Surface Algorithms Local Sensitivity Curves chart, 118
Box-Behnken Design, 215 object reference, 45
Central Composite Design, 214 Six Sigma, 49, 182
Kriging, 217 six sigma analysis, 270
Non-Parametric Regression, 218 Six Sigma analysis, 48, 181
Sparse Grid, 221 object reference, 46
Standard Response Surface Full 2nd-Order Poly- postprocessing, 181
nomial, 216 statistical measures, 182
response surface charts, 99 using, 181
Goodness of Fit, 92 six sigma analysis, 257
local sensitivities, 112 choosing variables, 260
display, 113 guidelines, 260
Local Sensitivity chart, 115 postprocessing, 267
Local Sensitivity Curves chart, 118 sample generation, 267
Predicted vs. Observed chart, 95 sensitivities, 270
Response chart, 101 theory, 271
2D contour graph, 104 understanding, 258
2D Slices graph, 107 Six Sigma analysis postprocessing, 181
3D contour graph, 105 charts, 182
Spider charts, 112 sensitivities, 182
response surface types tables, 181
Kriging, 78 solution
neural network, 83 DOE, 63
non-parametric regression, 83 sparklines, 169
Sparse Grid, 84 Sparse Grid, 84
Standard Response Surface - Full 2nd-Order Polyno- Spider chart
mial , 77 object reference, 39
responses Using, 112
min-max search, 98
overview, 77 T
Tables, 205
S copy and paste, 207
sample generation editable output parameters, 206
Adaptive Multiple-Objective , 142 exporting data, 207
Adaptive Single-Objective , 139 importing data from CSV, 207
MISQP, 137 Tradeoff chart
MOGA, 132 object reference, 44
NLPQL, 135 tradeoff studies, 176
screening, 130, 232 troubleshooting, 279
six sigma analysis, 267
samples U
Samples chart uncertainty variables , 191
properties, 178 choosing, 260
Samples chart choosing distribution, 260
object reference, 45 response surface analyses, 260
Screening, 232 update
sensitivities chart design point update location, 195
description, 204 design point update order, 195
global, 45, 49, 182 user interface, 6
goal driven optimization, 45, 175
local, 112

Release 15.0 - SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
286 of ANSYS, Inc. and its subsidiaries and affiliates.
V
Verification Points, 96
Verify
    candidate points, 161

W
workflow, 13

Release 15.0 - SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.
