
Unconstrained Optimization

Consider the unconstrained minimization problem

$\displaystyle \min_{\alpha} E(\alpha)$     (1)

where $\alpha = ( \alpha _1 , \dots, \alpha _q )$. We refer to $\alpha$ as the design variables and to $E(\alpha)$ as the cost functional. A change in the design variables by $\epsilon \tilde \alpha$ introduces a change in the functional, which can be written as
$\displaystyle \delta E \equiv E(\alpha + \epsilon \tilde \alpha ) - E(\alpha) = \epsilon \tilde \alpha ^T \nabla E + \frac{1}{2} \epsilon ^2 \tilde \alpha ^T {\cal H} \tilde \alpha + O(\epsilon ^3).$     (2)

Here $(\nabla E)^T = ( \frac{\partial E}{\partial \alpha _1}, \dots , \frac{\partial E}{\partial \alpha _q} )$ and ${\cal H}$ stands for the Hessian, i.e., the matrix of second derivatives of $E$. We assume the Hessian ${\cal H}$ is positive definite, i.e., $\alpha ^T {\cal H} \alpha > 0$ for all $\alpha \neq 0$, which guarantees a unique minimum. For small $\epsilon$ we can neglect terms of second and higher order in $\epsilon$ and see that the choice $\tilde \alpha = - \nabla E$ results in a reduction of the functional, that is,
$\displaystyle E(\alpha - \epsilon \nabla E ) - E(\alpha) = - \epsilon \Vert \nabla E \Vert ^ 2 + O(\epsilon ^2).$     (3)
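For example, for the quadratic cost functional $E(\alpha) = \frac{1}{2} \alpha ^T A \alpha - b ^T \alpha$ with a symmetric positive definite matrix $A$, the expansion (2) is exact with

$\displaystyle \nabla E = A \alpha - b, \qquad {\cal H} = A,$

so the unique minimum is the solution of $A \alpha = b$.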

This is the basis for the steepest descent method and other gradient-based methods. The gradient $\nabla E$ of the functional to be minimized can easily be computed in this case, for example by finite differences. At a minimum the following equations hold:
$\displaystyle \mbox{\tt Optimality Condition:} \qquad \frac{\partial E}{\partial \alpha _j} = 0 \qquad \qquad j=1, \dots, q.$     (4)

These equations are called the (first order) necessary conditions for the problem.
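As a concrete illustration, the following is a minimal sketch in Python (not part of the original notes) of the steepest descent method with a finite-difference gradient. The quadratic cost functional, the fixed step size eps, and the stopping tolerance tol are assumptions chosen only for this example.

import numpy as np

def grad_fd(E, alpha, h=1e-6):
    """Approximate the gradient of E at alpha by central finite differences."""
    g = np.zeros_like(alpha)
    for j in range(alpha.size):
        e = np.zeros_like(alpha)
        e[j] = h
        g[j] = (E(alpha + e) - E(alpha - e)) / (2.0 * h)
    return g

def steepest_descent(E, alpha0, eps=0.1, tol=1e-8, max_iter=1000):
    """Iterate alpha <- alpha - eps * grad E(alpha) until the first order
    necessary conditions (4) are nearly satisfied."""
    alpha = np.asarray(alpha0, dtype=float)
    for _ in range(max_iter):
        g = grad_fd(E, alpha)
        if np.linalg.norm(g) < tol:   # optimality condition: gradient (nearly) zero
            break
        alpha = alpha - eps * g       # descent step, cf. equation (3)
    return alpha

# Example: quadratic cost E(alpha) = 1/2 alpha^T A alpha - b^T alpha
A = np.array([[2.0, 0.0], [0.0, 4.0]])   # positive definite Hessian
b = np.array([1.0, 1.0])
E = lambda a: 0.5 * a @ A @ a - b @ a

alpha_min = steepest_descent(E, alpha0=[0.0, 0.0])
print(alpha_min)   # close to the solution of A alpha = b, i.e. (0.5, 0.25)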


Shlomo Ta'asan 2001-08-22