
Constructing The Preconditioner from Its Symbol

We come now to the question of constructing the preconditioner from its symbol. Since we are only interested in accelerating a certain numerical procedure, it is enough to use approximations to the true Hessians. Let us begin with the simplest examples. We have seen the correspondence between differential operators and symbols,
$\displaystyle \mbox{\tt Symbol: } i k_j \qquad\qquad\qquad \mbox{\tt Operator: }\frac{\partial } { \partial x_j} \qquad j=1, \dots, 3$     (9)

and therefore
$\displaystyle \mbox{\tt Symbol: }(i k_j) ^m \qquad\qquad\qquad \mbox{\tt Operator: }\frac{\partial ^m } {\partial x_j^m} \qquad j=1, \dots, 3$     (10)

Polynomials in $i k_j$, even in several dimensions, correspond to differential operators that are easily found, as was shown in a previous lecture.
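For example, the symbol $-\vert {\bf k}\vert ^2 = (ik_1)^2 + (ik_2)^2 + (ik_3)^2$ corresponds to the Laplacian $\Delta$.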



Example I. Consider the problem given in example V in lecture no. 2.

\begin{eqnarray*}
\min_{\alpha} \frac{1}{2} \int _{\partial\Omega} (u - u^*)^2 dx
\end{eqnarray*}



subject to

\begin{eqnarray*}
\left( \begin{array}{cr} \beta^2 \frac{\partial }{\partial x} & \frac{\partial }{\partial y} \\
\frac{\partial }{\partial y} & -\frac{\partial }{\partial x} \end{array}\right)
\left( \begin{array}{c} u \\ v \end{array}\right) =
\left( \begin{array}{c} 0 \\ 0 \end{array}\right) \qquad\qquad \Omega
\end{eqnarray*}



with the boundary condition

\begin{eqnarray*}
v = \frac{\partial \alpha}{\partial x} \qquad\qquad \partial\Omega ,
\end{eqnarray*}



where $\beta^2 = (1 - M^2)$ and $\Omega =\{(x,y)\vert y > 0\} $. It was shown there that $\hat{\cal H}({k}) = \frac{\vert k\vert ^2}{\beta ^2}$. This implies that
$\displaystyle {\cal H}_{approx} = - \frac{1}{\beta ^2} \frac{d^2}{dx^2}$     (11)

An effective preconditioner ${\cal R}$ must satisfy $ \hat{\cal R} (k) = \frac{\beta ^2}{\vert k\vert^2}$ for large $\vert k\vert$ and this is obtained for
$\displaystyle {\cal R}^{-1} = \mu I - \frac{1}{\beta ^2} \frac{d^2}{dx^2}$     (12)
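Indeed, $\hat{\cal R}^{-1}(k) = \mu + \frac{\vert k\vert ^2}{\beta ^2}$, so that $\hat{\cal R}(k) \approx \frac{\beta ^2}{\vert k\vert ^2}$ for large $\vert k\vert$.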

The operator $\mu I$ is added to ensure that the preconditioner does not affect the low frequency range. The choice $\mu=1$ can be taken, although some approximation of the first eigenvalue may give a better choice. Thus, the implementation of a preconditioned iteration for that problem consists of repeated application of the two steps
$\displaystyle \begin{array}{l}
\mu \psi -\frac{1}{\beta ^2} \frac{d^2}{dx^2} \psi = - \lambda _x \\
\alpha \leftarrow \alpha - \delta \psi
\end{array}$     (13)

where $\delta$ is found using a line search on coarse grids, and $\delta = 1$ is taken on fine grids. Note that the construction of the preconditioner was done on the differential level, but the numerical implementation uses some approximation of it, e.g., a finite difference approximation.

A good (h-elliptic) discretization of the state equation uses a staggered grid. We demonstrate it on a rectangular domain with a uniform grid of spacing $h$. Let the grid points be labeled $\{ (i,j) \vert 0 \leq i \leq N_x; 0 \leq j \leq N_y \}$. The discrete variables approximating $u$ are located at the middle of the vertical cell edges, i.e., are parameterized as $u _{i,j+1/2}$. The discrete approximations to $v$ are located at the middle of the horizontal edges, i.e., are parameterized by $v _{i+1/2, j}$. Discretization of the first equation is done at the cell centers and of the second equation at the vertices, both using central differences. Design variables are located at the boundary nodes, and the boundary condition is given by

$\displaystyle v _{i+1/2,0} = \frac{1}{h} ( \alpha _{i+1} - \alpha _i ).$     (14)

A calculation of the cost functional for the discrete problem requires the values of $u$ on the boundary $j=0$. This is done by introducing ghost variables $u _{i+1/2, -1/2}$. An extra equation for these ghost values is introduced at the boundary nodes, approximating the second interior equation. We introduce adjoint variables (Lagrange multipliers) $(\lambda, \mu)$ discretized as $\lambda _{i+1/2,j+1/2}$ and $\mu _{i,j}$ with ghost points for $\lambda$. The adjoint variables satisfy the same equation as $(u,v)$ but at points shifted by $(1/2,1/2)$. A straightforward calculation shows that the gradient is given by
$\displaystyle \frac{1}{2h} ( \lambda _{i+1/2, 1/2} - \lambda _{i-1/2, 1/2} +
\lambda _{i+1/2, -1/2} - \lambda _{i-1/2, -1/2}).$     (15)

The discrete preconditioner is applied as follows. Let $\psi _j$, $j=1, \dots, N$, be the solution of the discrete problem
$\displaystyle \begin{array}{ll}
\beta ^2 h^2 \mu \psi _j
-\psi _{j-1} + 2 \psi _j - \psi _{j+1} = & - \frac{\beta ^2 h}{2} \, ( \lambda _{j+1/2, 1/2} - \lambda _{j-1/2, 1/2}\\  & +
\lambda _{j+1/2, -1/2} - \lambda _{j-1/2, -1/2})
\end{array}$     (16)

for $j=1,\dots, N-1$, where $h$ is the mesh size used for the discretization and $\psi _0 = \psi _N = 0$. The design variables are updated by
$\displaystyle \alpha _j = \alpha _j - \delta \psi _j \qquad j = 1, \dots , N$     (17)

Note that applying the preconditioner requires the solution of a differential equation on the boundary where the control is given. This is a typical case: the equation defining the preconditioner is posed in one dimension less than the state and costate equations.
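As an illustration, a minimal numerical sketch of the preconditioning step (16)-(17) is given below in Python/NumPy. The routine and array names are our own choices, the boundary values of $\lambda$ are assumed to be available from the adjoint solve, and the tridiagonal system is handled by a standard banded solver.

import numpy as np
from scipy.linalg import solve_banded

def precondition_update(alpha, lam_plus, lam_minus, beta2, h, mu=1.0, delta=1.0):
    # One preconditioned design update for Example I (illustrative sketch).
    #   alpha     : design values at boundary nodes 0..N            (length N+1)
    #   lam_plus  : lambda_{i+1/2, 1/2}  at boundary cells 0..N-1   (length N)
    #   lam_minus : lambda_{i+1/2,-1/2}  at boundary cells 0..N-1   (length N)
    N = lam_plus.size
    # Discrete gradient (15) at the interior boundary nodes j = 1, ..., N-1.
    g = (lam_plus[1:] - lam_plus[:-1] + lam_minus[1:] - lam_minus[:-1]) / (2.0 * h)
    # Tridiagonal system (16): (beta^2 h^2 mu + 2) psi_j - psi_{j-1} - psi_{j+1}
    # = -beta^2 h^2 g_j, with psi_0 = psi_N = 0.
    n = N - 1
    ab = np.zeros((3, n))
    ab[0, 1:]  = -1.0                      # super-diagonal
    ab[1, :]   = beta2 * h**2 * mu + 2.0   # main diagonal
    ab[2, :-1] = -1.0                      # sub-diagonal
    psi = np.zeros(N + 1)
    psi[1:-1] = solve_banded((1, 1), ab, -beta2 * h**2 * g)
    # Design update (17): alpha <- alpha - delta * psi.
    return alpha - delta * psi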



Example II. We now move to a more challenging case, the construction of an approximation to a Hessian with symbol $\hat{\cal H} ( {\bf k} ) = \vert{\bf k}\vert $, where the problem is posed on the boundary of a domain in three space dimensions. Recall that in lecture no. 2 in this volume we discussed the mapping


$\displaystyle \frac{\partial \phi}{\partial n}_{\vert\Gamma} = T \phi _{\vert\Gamma},$     (18)

where $\phi$ is the solution of a Laplace equation in the domain $\Omega$
$\displaystyle \Delta \phi = 0,$     (19)

and we have found that its symbol is
$\displaystyle \hat{T} ({\bf k}) = \vert{\bf k}\vert.$     (20)

The construction of an operator $T$, mapping functions defined on the boundary of a domain to functions defined on the same boundary, whose symbol is $\vert{\bf k}\vert$, is done as follows. Let $g$ be a function defined on the boundary of $\Omega$; we define $Tg $ by

$\displaystyle Tg = \frac{\partial \phi}{\partial n}_{\vert{\partial\Omega}},$     (21)

where
$\displaystyle \begin{array}{lr}
\Delta \phi = 0 & \Omega \\
\phi = g & \partial\Omega .
\end{array}$     (22)
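For the half-space model this reproduces the symbol (20): the bounded harmonic extension of boundary data $e^{i {\bf k}\cdot {\bf x}}$ is $e^{i {\bf k}\cdot {\bf x}} e^{-\vert {\bf k}\vert y}$, and its outward normal derivative on the boundary is $\vert{\bf k}\vert$ times the data.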

Another case we consider is an operator $S$ whose symbol is

$\displaystyle \hat S({\bf k}) = \frac{1}{\vert{\bf k}\vert}.$     (23)

It can be approximated as
$\displaystyle S g = \phi _{\vert _{\partial\Omega}}$     (24)

where $\phi$ is the solution of
$\displaystyle \begin{array}{lr}
\Delta \phi = 0 & \Omega \\
\frac{\partial \phi}{\partial n}= g & \partial\Omega
\end{array}$     (25)

This follows from certain relations that we obtained in a previous lecture.
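As a concrete illustration, consider a flat boundary modeled as a $(2\pi)$-periodic plane. There the harmonic extension is known in closed form, and $T$ and $S$ reduce to the Fourier multipliers $\vert{\bf k}\vert$ and $1/\vert{\bf k}\vert$, so the symbols can be checked numerically with a few lines of Python/NumPy. This is only a sketch of the flat-boundary case (the routine names are ours, and the $k=0$ mode of $S$ is simply dropped); on a general boundary one applies the definitions (21)-(22) and (24)-(25) directly, using an existing Laplace solver.

import numpy as np

def wavenumbers(n1, n2):
    # Integer wavenumbers (k1, k2) on a (2*pi)-periodic flat boundary grid.
    return np.meshgrid(np.fft.fftfreq(n1, 1.0 / n1),
                       np.fft.fftfreq(n2, 1.0 / n2), indexing="ij")

def apply_T(g):
    # Symbol |k|, eq. (20): Dirichlet-to-Neumann map of the flat-boundary model.
    K1, K2 = wavenumbers(*g.shape)
    return np.fft.ifft2(np.hypot(K1, K2) * np.fft.fft2(g)).real

def apply_S(g):
    # Symbol 1/|k|, eq. (23): Neumann-to-Dirichlet map; the k = 0 mode is dropped.
    K1, K2 = wavenumbers(*g.shape)
    absk = np.hypot(K1, K2)
    sym = np.zeros_like(absk)
    sym[absk > 0] = 1.0 / absk[absk > 0]
    return np.fft.ifft2(sym * np.fft.fft2(g)).real

# Check on a single Fourier mode: T applied to sin(3*x1 + 4*x2) scales it by 5.
x1, x2 = np.meshgrid(np.linspace(0, 2 * np.pi, 64, endpoint=False),
                     np.linspace(0, 2 * np.pi, 64, endpoint=False), indexing="ij")
g = np.sin(3 * x1 + 4 * x2)
print(np.allclose(apply_T(g), 5.0 * g))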



Example III. Here we construct an operator whose symbol is $i (a k_1 + b k_2)/\vert{\bf k}\vert $. We have a product of symbols, and each factor is something that we already know. A product of symbols corresponds to applying the corresponding operators one after the other (in the proper order for systems of differential equations).

The symbol $i (a k_1 + b k_2) $ corresponds to the operator $a \frac{\partial }{\partial t_1} + b \frac{\partial }{\partial t_2}$, where $t_1, t_2$ are the tangential coordinates corresponding to the wave directions $k_1, k_2$, respectively. Let $\phi$ be the solution of (25); then

$\displaystyle T g = (a \frac{\partial }{\partial t_1} + b \frac{\partial }{\partial t_2}) \phi _{\vert _{\partial\Omega}}$     (26)

has the desired symbol.
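On the flat-boundary model used in the previous sketch, this construction reduces to the single Fourier multiplier $i(a k_1 + b k_2)/\vert{\bf k}\vert$, obtained by composing the multiplier of $S$ with that of the tangential derivatives. A minimal Python/NumPy sketch (the routine name is ours, and the $k=0$ mode is dropped):

import numpy as np

def apply_example_iii(g, a, b):
    # Operator of Example III on a flat, (2*pi)-periodic boundary:
    # S (symbol 1/|k|) followed by a*d/dt1 + b*d/dt2 (symbol i*(a*k1 + b*k2)),
    # i.e. the single multiplier i*(a*k1 + b*k2)/|k| of eq. (26).
    n1, n2 = g.shape
    K1, K2 = np.meshgrid(np.fft.fftfreq(n1, 1.0 / n1),
                         np.fft.fftfreq(n2, 1.0 / n2), indexing="ij")
    absk = np.hypot(K1, K2)
    sym = np.zeros_like(absk, dtype=complex)
    nz = absk > 0
    sym[nz] = 1j * (a * K1[nz] + b * K2[nz]) / absk[nz]   # k = 0 mode dropped
    return np.fft.ifft2(sym * np.fft.fft2(g)).real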



Remark: The operators that we have constructed in the last example are nonlocal, and one may also construct integral operators with singular kernels for them. We prefer the present approach since, in the context of optimal design problems, one already has a (fast) solver for the equations needed for these pseudo-differential operators.


Shlomo Ta'asan 2001-08-22