next up previous
Next: Full Multigrid Up: One-Shot Multigrid Methods Previous: Design Space of Moderate

Infinite Dimensional Design Space

The most challenging case is the infinite dimensional design case. It includes all the difficulties one may encounter in an optimization problem. Our main goal here is to construct a relaxation that smooths the design variables; that is, one that makes high frequency changes in the design variables on fine grids and low frequency changes on coarse grids. On each grid we want to update the design variables only in the scales that are high frequency for that level. In this way we split the design process across all levels of discretization. These ideas were developed by Arian and Ta'asan in [1],[2],[3].

In order to look for changes in the design variables we use the gradient, which is calculated using the adjoint method. Given an error in the design variables, one may ask what the corresponding gradient looks like. That is, suppose that the error is mainly a high frequency; will the gradient also be dominated by high frequencies? The answer is given by the Hessian, which we know how to analyze using Fourier techniques, as discussed in the previous lectures.

We consider example I of section 4.1. For the quasi-elliptic discretization we know that no local relaxation (i.e., a relaxation based on the gradient) can have the smoothing property, since the gradient does not 'feel' the oscillatory parts in the errors of the design variables. Hence such errors cannot be corrected efficiently by using the gradient. The h-elliptic discretization in that example had a symbol $\mu _c ^2 (\theta )$, and if we consider the expression

$\displaystyle 1 - \delta \hat {\cal H}^h (\theta ) = 1 - \delta \mu _c ^2 (\theta )$     (47)

it is easy to see that by choosing, for example,
$\displaystyle \delta = 2/[\hat {\cal H}^h (\pi /2) + \hat {\cal H}^h (\pi) ]$     (48)

one achieves a good reduction for all high frequencies. It is important to notice that the low frequencies are not changed much by this relaxation, since $\hat {\cal H}^h (\theta )$ vanishes at $\theta = 0$.
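This damping analysis can be checked numerically. The following is a minimal sketch in which $\mu_c^2(\theta) = \sin^2(\theta/2)$ is an assumed stand-in for the actual symbol of example I:

```python
import numpy as np

# Sketch of the damping analysis in (47)-(48).  The symbol below,
# H(theta) = sin^2(theta/2), is an assumed stand-in for mu_c^2(theta);
# the actual symbol comes from example I of section 4.1.
def H(theta):
    return np.sin(theta / 2.0) ** 2

delta = 2.0 / (H(np.pi / 2) + H(np.pi))          # choice (48)

theta_high = np.linspace(np.pi / 2, np.pi, 201)  # high-frequency range
amp = np.abs(1.0 - delta * H(theta_high))        # symbol (47)

print("worst high-frequency factor:", amp.max())        # about 1/3
print("factor at theta = 0:", abs(1.0 - delta * H(0)))  # 1: low modes kept
```

All high frequencies are reduced by a factor of about 1/3 per sweep, while the symbol at $\theta = 0$ is exactly 1, so the smooth components are left for the coarse grids.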

In solving equations it is not so important to keep the low frequencies unchanged in the relaxation; it just gives the coarse levels an easier job. Here, however, it is crucial not to introduce low frequency changes on the fine grids. The reason is that the corresponding effect on the state and costate has to be computed, and when these changes are smooth this computation is expensive. Only the high frequency effect can be computed fast, due to the smoothing and to its localization to a vicinity of the boundary in elliptic problems.

In example II, the quasi-elliptic case does not admit an efficient treatment, just as in example I. The h-elliptic discretization in that example had the symbol

$\displaystyle \hat {\cal H}^h (\theta ) = \frac{ \sin ^2 ( \theta /2) } { \mu _c^2 (\theta ) }$     (49)

with the property
$\displaystyle 0 < c_1 \leq \hat {\cal H}^h (\theta ) \leq c_2 \qquad \qquad \vert \theta \vert \leq \pi$     (50)

Thus one can choose
$\displaystyle \delta = 2/(c_1 + c_2)$     (51)

and get a relaxation that converges at a rate independent of the number of design variables. This may seem very good; however, as can be seen, the Hessian does not filter out the low frequencies as it did in the previous example. This implies that the change in the state and adjoint variables resulting from the change in the design variables can no longer be calculated efficiently. The total efficiency of a multigrid method based on this relaxation will be far from optimal. Some modification of the gradient descent direction has to be introduced here to achieve optimal performance. This leads us to the general treatment.
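The mesh-independent rate, and the fact that low frequencies are nevertheless changed, can be verified numerically. A sketch assuming an illustrative bounded symbol with $c_1 = 1/2$, $c_2 = 2$ (not the actual Hessian symbol of example II):

```python
import numpy as np

# Sketch for (49)-(51): a symbol bounded as 0 < c1 <= H(theta) <= c2 gives
# the mesh-independent worst-case factor (c2 - c1)/(c2 + c1).  The symbol
# below is an assumed stand-in, not the actual symbol of example II.
c1, c2 = 0.5, 2.0

def H(theta):
    return c1 + (c2 - c1) * np.sin(theta / 2.0) ** 2  # ranges over [c1, c2]

delta = 2.0 / (c1 + c2)                               # choice (51)

theta = np.linspace(-np.pi, np.pi, 401)
amp = np.abs(1.0 - delta * H(theta))

print("worst-case factor:", amp.max())   # (c2 - c1)/(c2 + c1) = 0.6
print("factor at theta = 0:", amp[200])  # midpoint theta = 0: low modes change too
```

The factor 0.6 holds uniformly in $\theta$, but it also applies at $\theta = 0$: the relaxation alters the smooth components, which is exactly the expensive situation described above.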

Preconditioners for Multigrid Relaxation. To overcome difficulties such as in the last example, we introduce a preconditioner that is applied to the gradient before using it to define a direction of change for the design variables. Call that preconditioner ${\cal R}$, with a symbol $\hat {\cal R}^h (\theta ) $. Our task is to choose $\hat {\cal R}^h (\theta )$ such that

$\displaystyle \hat {\cal R}^h (\theta ) \hat {\cal H}^h (\theta ) \geq C \sum _j \sin ^{\gamma} ( \theta _j /2)$     (52)

with $\gamma > 0$. To achieve this we need to analyze the discrete symbol $\hat {\cal H}^h (\theta )$. This, however, may be a difficult task in general, and an alternative approach is adopted.

The idea is to analyze the behavior of $\hat{\cal H}({\bf k})$ and $\hat {\cal R} ({\bf k})$ for the differential problem, since in that case the computation is much simpler. This is followed by a discretization that does not violate h-ellipticity, leading to a proper choice for $\hat {\cal R}^h (\theta ) $.

We therefore construct $\hat {\cal R} ({\bf k})$ such that

$\displaystyle \hat {\cal R} ({\bf k}) \hat {\cal H} ({\bf k}) \geq \vert {\bf k} \vert ^\gamma$     (53)

with $\gamma > 0$. For practical applications, $\gamma = 2$ yields good smoothing properties. The smoothing analysis of the problem is then given by considering the preconditioned relaxation
$\displaystyle \alpha \leftarrow \alpha - \delta {\cal R} g$     (54)

where $g$ is the gradient and $\delta$ is found by a line search on coarse levels and by Fourier analysis on fine grids. The Fourier symbol of the iteration matrix for this relaxation is
$\displaystyle 1 - \delta \hat {\cal R}^h (\theta ) \hat {\cal H}^h (\theta ).$     (55)

A choice for $\delta$ for fine grids is easily calculated as
$\displaystyle \delta = 2/ [\hat {\cal R}^h (\pi/2)\hat {\cal H}^h (\pi/2) + \hat {\cal R}^h (\pi)\hat {\cal H}^h (\pi)].$     (56)

Notice that the relaxation does not require a line search on the fine grids. The coarsest grids, for which the analysis of the Hessian is not accurate enough, use a standard BFGS method.
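A sketch of this fine-grid analysis, assuming for illustration a preconditioned product $\hat{\cal R}^h \hat{\cal H}^h(\theta) = 4\sin^2(\theta/2)$, which behaves like $\theta^2$ near zero and so satisfies (52) with $\gamma = 2$:

```python
import numpy as np

# Sketch of (55)-(56).  The product below, RH(theta) = 4 sin^2(theta/2),
# is an assumed example satisfying condition (52) with gamma = 2
# (it behaves like theta^2 near theta = 0).
def RH(theta):
    return 4.0 * np.sin(theta / 2.0) ** 2

delta = 2.0 / (RH(np.pi / 2) + RH(np.pi))       # choice (56), no line search

theta_high = np.linspace(np.pi / 2, np.pi, 201)
amp = np.abs(1.0 - delta * RH(theta_high))      # symbol (55)
print("worst high-frequency factor:", amp.max())  # about 1/3
```

As in the first example, all high frequencies are damped by a fixed factor, while the preconditioned symbol still vanishes like $\theta^2$ at $\theta = 0$, so the smooth components are left to the coarse grids.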

For example I of section 4.1 with the h-elliptic discretization, we obtain that the preconditioner ${\cal R}^h$ should have the symbol

$\displaystyle \hat {\cal R}^h (\theta ) = \frac{h^2}{4 \sin ^2 (\theta /2)},$     (57)

which means that ${\cal R}^h$ is the inverse of $T^h$ given by
$\displaystyle (T^h g) _k = -\frac{1}{h^2} ( g_{k+1} - 2 g_k + g_{k-1} )$     (58)
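In practice, applying ${\cal R}^h = (T^h)^{-1}$ means solving a tridiagonal system rather than forming an inverse. A minimal sketch, where the positive-definite sign convention for $T^h$ and the Dirichlet-type closure at the boundary are assumptions made for this illustration:

```python
import numpy as np

# Sketch: applying R^h = (T^h)^{-1} by solving T^h r = g, where T^h is the
# negated second-difference operator (positive definite with the assumed
# Dirichlet closure at both ends).
def precondition(g, h):
    n = len(g)
    T = np.zeros((n, n))
    idx = np.arange(n)
    T[idx, idx] = 2.0 / h**2                 # diagonal
    T[idx[:-1], idx[:-1] + 1] = -1.0 / h**2  # super-diagonal
    T[idx[1:], idx[1:] - 1] = -1.0 / h**2    # sub-diagonal
    return np.linalg.solve(T, g)             # r = (T^h)^{-1} g

# usage: precondition a sample gradient on a grid with h = 1/64
n, h = 63, 1.0 / 64
g = np.sin(np.pi * h * np.arange(1, n + 1))  # lowest Dirichlet mode
r = precondition(g, h)
# g is an eigenvector of T^h with eigenvalue (4/h^2) sin^2(pi*h/2),
# so r equals g divided by that eigenvalue
```

A banded or multigrid solver would replace the dense solve in a real implementation; the dense form is used here only to keep the sketch short.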

Shlomo Ta'asan 2001-08-22