Gradient Calculation: Constrained Optimization

Black Box Methods are the simplest approach to solving constrained optimization problems; they calculate the gradient in the following way. Let $\delta E$ be the change in the cost functional resulting from a change $\tilde \alpha$ in the design variables. The following relation holds

$\displaystyle \delta E = \tilde \alpha^T E_\alpha + \tilde \alpha^T U_\alpha^T E_U$     (27)

where $U_\alpha$ denotes the partial derivatives of $U$ with respect to the design variables, also termed sensitivity derivatives. All quantities in this expression are straightforward to calculate except $U_\alpha$, whose dimension is the dimension of $U$ times the dimension of $\alpha$.

In this approach $U_\alpha$ is calculated using finite differences. That is, for each design parameter $\alpha_j$ in the representation $\alpha = \sum_{j=1}^q \alpha_j e_j$, where the $e_j$ are a set of vectors spanning the design space, one performs the following process

ALGORITHM: Black-Box Gradient Calculation

Once the above process is completed for $j=1, \dots, q$, one combines the results into


$\displaystyle U_\alpha= \left(\frac{\partial U}{\partial e_1}, \dots,\frac{\partial U}{\partial e_q}\right)$     (28)

which is used in the calculation of the gradient
$\displaystyle \nabla E = E_\alpha + U_\alpha ^T E_U.$     (29)

Since in practical problems the dimension of $U$ may range from thousands to millions, and each of the $q$ finite-difference perturbations requires an additional solution of the state equation, calculating gradients by this approach is feasible only when the number of design variables is very small.
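As a concrete illustration, the finite-difference assembly of $U_\alpha$ (28) and the gradient formula (29) can be sketched as follows. The state solver and the cost functional $E(\alpha, U) = \alpha^T\alpha + U^T U$ below are hypothetical stand-ins, not taken from the notes:

```python
import numpy as np

# Hypothetical toy model: solve_state(alpha) stands in for the solver of the
# (unspecified) state equation, and E_alpha, E_U are the partial derivatives
# of the assumed cost E(alpha, U) = alpha.alpha + U.U.
def solve_state(alpha):
    return np.array([alpha[0] ** 2 + alpha[1], np.sin(alpha[1])])

def E_alpha(alpha, U):   # partial derivative of E with respect to alpha
    return 2.0 * alpha

def E_U(alpha, U):       # partial derivative of E with respect to U
    return 2.0 * U

def black_box_gradient(alpha, h=1e-6):
    q = alpha.size
    U0 = solve_state(alpha)
    # Columns of U_alpha, eq. (28): one extra state solve per design variable.
    U_alpha = np.column_stack(
        [(solve_state(alpha + h * np.eye(q)[j]) - U0) / h for j in range(q)]
    )
    # Gradient, eq. (29): E_alpha + U_alpha^T E_U.
    return E_alpha(alpha, U0) + U_alpha.T @ E_U(alpha, U0)
```

The cost of the loop is $q$ additional state solves, which is exactly the scaling that limits the method to small design spaces.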



The Adjoint Method is an efficient way of calculating gradients for constrained optimization problems, even when the design space has very large dimension. The idea is to use the expression for the gradient as it appears in (18). Thus, one introduces into the solution process an extra unknown, $\lambda$, which satisfies the adjoint equation (13).

A minimization algorithm is then a repeated application of the following three steps.



ALGORITHM: Adjoint Method
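A minimal sketch of such a loop, under assumptions not taken from the notes: a hypothetical linear state equation $AU = B\alpha$ and cost $E = \frac{1}{2}\Vert U - U_t\Vert^2$, so that $E_U = U - U_t$ and $E_\alpha = 0$; the adjoint equation then reads $A^T\lambda = E_U$ and the gradient is $B^T\lambda$ (the exact signs depend on how equations (13) and (18) write the constraint):

```python
import numpy as np

# Hypothetical model problem: A, B, U_t are stand-ins for the state operator,
# the design-to-state coupling, and a target state.
rng = np.random.default_rng(0)
n, q = 6, 2                                  # dim(U), dim(alpha)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, q))
U_t = rng.standard_normal(n)

alpha = np.zeros(q)
step = 0.01                                  # fixed gradient-descent step
for _ in range(500):
    U = np.linalg.solve(A, B @ alpha)        # 1. solve the state equation
    lam = np.linalg.solve(A.T, U - U_t)      # 2. solve the adjoint equation
    grad = B.T @ lam                         # 3. form the gradient ...
    alpha -= step * grad                     #    ... and update the design
```

Note that each iteration costs one state solve and one adjoint solve regardless of $q$, which is the advantage over the black-box approach above.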


Shlomo Ta'asan 2001-08-22