Center for Nonlinear Analysis

## 2012 Summer Undergraduate Applied Mathematics Institute

### Projects:

• Combinatorics on Words, Hrothgar, Michael Druggan, Archit Kulkarni
Abstract: According to Irina, the main points of the project were:
• Familiarize yourselves with the notion of subword complexity of finite and infinite words and its significance. Read several important papers on the subject.
• Analyze in detail the Sturmian and de Bruijn words, which are the words with the highest and lowest subword complexity.
The group expanded the known list of necessary conditions for a vector to be the subword complexity sequence of a finite word. They worked on a conjecture by Matt Green and Arash Enayati about the asymptotic behavior of the number of distinct subword complexity sequences. While certain recurrence patterns in the sequence have been observed and proved, this work is still in progress.
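The subword complexity sequence of a finite word can be computed directly; a minimal sketch (the function name and the example words are illustrative choices, not taken from the group's work):

```python
def subword_complexity(word):
    """Subword complexity sequence of a finite word: the n-th entry
    (n = 1, ..., len(word)) counts the distinct subwords (contiguous
    factors) of length n."""
    return [len({word[i:i + n] for i in range(len(word) - n + 1)})
            for n in range(1, len(word) + 1)]

# A binary de Bruijn word of order 2: every binary word of length 2 appears
# exactly once, so the complexity is as large as the length allows at every n.
print(subword_complexity("00110"))   # [2, 4, 3, 2, 1]

# A constant word has the lowest possible complexity: one subword of each length.
print(subword_complexity("aaaa"))    # [1, 1, 1, 1]
```

The necessary conditions the group studied are constraints on exactly such sequences, e.g. each entry is bounded by both $|\Sigma|^n$ and $L-n+1$ for a word of length $L$.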

• A First Look at Boundary Value Problems, Emmalyne Wyatt, Jessica Jennings
Abstract: In this project, E. Wyatt and J. Jennings studied some boundary value problems in one dimension, such as $-u'' + u = g, \quad u(0) = u(1) = 0. \;\;\;\; (1)$ They tried to understand the difference between this kind of problem and ODE initial value problems, and then studied (1) by two methods. First, they computed explicit solutions using Fourier methods, taking as an example the tent function $g(x) = \begin{cases} x & \text{if } 0 \leq x \leq \frac{1}{2}, \\ 1-x & \text{if } \frac{1}{2} \leq x \leq 1. \end{cases}$ After that they also proved existence of solutions using functional analysis. For this they learned about Hilbert spaces, with emphasis on weak convergence, the Riesz representation theorem, and orthonormal bases (and made the link with the Fourier method). They also learned about the space of square integrable functions $L^2$ and the Sobolev space $H^1$, getting familiar with the notion of weak derivatives. They then derived a weak formulation of (1) and solved it both by using the Riesz representation theorem and by proving existence of minimizers of the functional $J(u) = \frac{1}{2} \int_0^1 [(u')^2 + u^2] \, dx - \int_0^1 gu \, dx$ via the Direct Method of the Calculus of Variations. Then they used both the Fourier method and the abstract method to study the convergence as $\epsilon \rightarrow 0$ of the solutions $u_{\epsilon}$ of the problem $-u'' + u = g_{\epsilon}, \quad u(0) = u(1) = 0,$ where $g_{\epsilon}(x) = g\left(\frac{x}{\epsilon}\right)$ with $g$ a periodic function. They showed that $u_{\epsilon}$ converges strongly to the solution of $-u'' + u = \bar{g}, \quad u(0) = u(1) = 0,$ where $\bar{g} = \int_0^1 g \, dx.$
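The Euler-Lagrange equation of the functional $J$ is $-u'' + u = g$ with the given boundary conditions, so the Fourier and variational routes can be checked against each other numerically. A minimal sketch (not the students' code; the grid size and truncation order are arbitrary choices) comparing a finite-difference solve with the Fourier sine series for the tent function:

```python
import numpy as np

# Solve -u'' + u = g, u(0) = u(1) = 0 by second-order central finite differences.
def solve_bvp(g, n=200):
    """g: vectorized source term; n: number of interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]        # interior nodes, spacing h
    A = (np.diag((2.0 / h**2 + 1.0) * np.ones(n))
         + np.diag((-1.0 / h**2) * np.ones(n - 1), 1)
         + np.diag((-1.0 / h**2) * np.ones(n - 1), -1))
    return x, np.linalg.solve(A, g(x))

# The tent function from the project.
tent = lambda x: np.where(x <= 0.5, x, 1.0 - x)

x, u = solve_bvp(tent)

# Fourier method: the sine coefficients of the tent are b_k = 4 sin(k pi/2)/(k pi)^2,
# and -u'' + u = g gives u_k = b_k / (1 + (k pi)^2).
k = np.arange(1, 60)
b = 4.0 * np.sin(k * np.pi / 2) / (k * np.pi) ** 2
u_fourier = (b / (1.0 + (k * np.pi) ** 2)) @ np.sin(np.outer(k * np.pi, x))

# The two methods agree up to discretization/truncation error.
print(np.max(np.abs(u - u_fourier)))
```

The same finite-difference solver can also be fed $g_{\epsilon}(x) = g(x/\epsilon)$ for small $\epsilon$ to observe the homogenization limit numerically.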
• Image Denoising, J. Lee, N. Scavilla
Abstract: In this project, J. Lee and N. Scavilla studied two methods for denoising images. They first concentrated on the Tychonov method, where a corrupted image $g$ is denoised by solving the minimization problem $\min \frac{1}{2}\int_{[0,1]^2} |\nabla u|^2 + \frac{\lambda}{2}\int_{[0,1]^2} (u-g)^2. \;\;\; (1)$ They derived the Euler-Lagrange equation for this problem, which is $-\Delta u + \lambda(u-g) = 0,$ and understood the associated (Neumann) boundary conditions. They then solved this equation using both finite differences and Fourier methods. They studied the effect of the parameter $\lambda$ on the solution and showed that, in the one-dimensional analog with for example $g = \chi_{[\frac{1}{2},1]}$, the denoised signal is always smooth, meaning that this model cannot capture edges. To overcome this problem, they considered the Rudin-Osher-Fatemi (ROF) model $\min \int_{[0,1]^2} |\nabla u| + \frac{\lambda}{2}\int_{[0,1]^2} (u-g)^2,$ involving the total variation. They showed in one dimension that for some values of $\lambda$ no smooth solution can exist, and studied the definition and basic properties of functions of bounded variation, again in one dimension. They then focused on the discretized problem $\min \sum_{i,j} \sqrt{(u_{i+1,j}-u_{i,j})^2 + (u_{i,j+1}-u_{i,j})^2} + \frac{\lambda}{2}\sum_{i,j} (u_{i,j}-g_{i,j})^2.$ Since this is a convex but nonsmooth problem, in order to derive its Euler-Lagrange equation they studied some convex analysis, with particular emphasis on the notions of subgradient and convex conjugate. Using these tools, they were able to rewrite the Euler-Lagrange equation as $u = g - \pi_{\lambda K}(g),$ where $\pi_{\lambda K}(g)$ is the projection of $g$ on the convex set $\lambda K = \{\operatorname{div}\, p : |p_{i,j}| \leq \lambda\}.$ They computed this projection using a semi-implicit gradient descent algorithm proposed by A. Chambolle.
They ran a large number of numerical examples comparing the Tychonov and the ROF models.
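The dual projection step can be sketched with Chambolle's semi-implicit fixed-point iteration on the dual variable $p$. This is a simplified illustration, not the students' code: the grid size, step size $\tau \leq 1/8$ (Chambolle's proven bound), iteration count, and test image are all assumptions, and the scaling follows Chambolle's paper, in which larger $\lambda$ means stronger smoothing.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient; last row/column of each component is zero."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] += px[0, :]
    d[1:-1, :] += px[1:-1, :] - px[:-2, :]
    d[-1, :] -= px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] -= py[:, -2]
    return d

def rof_denoise(g, lam, tau=0.125, iters=300):
    """u = g - pi_{lam K}(g): the projection is computed by Chambolle's
    semi-implicit fixed-point iteration on the dual field p, |p_{i,j}| <= 1."""
    px = np.zeros_like(g)
    py = np.zeros_like(g)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - g / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return g - lam * div(px, py)

# Toy example: a noisy vertical edge; TV denoising keeps the edge sharp,
# which is exactly the behavior the Tychonov model cannot reproduce.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = rof_denoise(noisy, lam=0.3)
```

Because $\operatorname{div}$ sums to zero with these boundary conditions, the iteration preserves the mean of the image while removing oscillations.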