Sparse optimization is an important technique in the data and information sciences. The idea is to fit a model to a dataset using the fewest possible terms. Sparsity can be enforced in a convex or non-convex manner, leading to many interesting approaches. My research involves the use of sparse optimization for partial differential equations. Some goals include sparse approximations of evolution equations and complex dynamics, reduced-order modeling and dimensional reduction, efficient numerical solvers, and structural analysis.
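As a standard illustration of the two regimes (a textbook formulation, not the specific models in my work), the convex approach to sparse regression solves

\[ \min_{x} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1, \]

where the \(\ell_1\) penalty acts as a convex surrogate for sparsity, while non-convex approaches penalize the number of nonzero entries \(\|x\|_0\) directly or use a non-convex relaxation of it.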
Nonlinear partial differential equations are fundamental models in many fields of mathematics, engineering, and science. Except in very simple cases, numerical approximation is the only tool scientists have for solving nonlinear problems. Numerical methods for nonlinear PDEs are challenging because complicated structures prevent the use of standard techniques; examples include complex constraints, multi-scale behavior, loss of regularity, and discontinuities. Problems I work on include constructing fast iterative solvers for nonlinear problems, numerical methods for complex problems, and data-driven methods.
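As one generic example of an iterative solver (a sketch for illustration, not a description of my specific methods), after discretizing a nonlinear PDE into a system \(F(u) = 0\), Newton's method iterates

\[ u^{k+1} = u^{k} - \bigl[F'(u^{k})\bigr]^{-1} F(u^{k}), \]

where \(F'(u^{k})\) is the Jacobian of the discretized system; fast solvers aim to reduce or avoid the cost of forming and inverting this Jacobian at every step.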
Ill-posed inverse problems make up a large portion of the mathematical problems in imaging science. During image acquisition and storage, data corruption and data loss are common. Reversing this process to recover the original data may not be possible, since the forward model is not invertible. My primary work in imaging focuses on developing (regularized) variational models that yield well-posed inverse methods. The problems I work on include cartoon-texture decomposition, inpainting, denoising, image analysis, and segmentation.
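A classical example of such a regularized variational model (included here only for illustration) is the Rudin-Osher-Fatemi total variation denoising model

\[ \min_{u} \; \frac{\lambda}{2}\|u - f\|_2^2 + \int_{\Omega} |\nabla u| \, dx, \]

where \(f\) is the corrupted image, the total variation term promotes piecewise-constant (cartoon-like) structure, and the parameter \(\lambda\) balances fidelity to the data against regularity of the reconstruction.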