## Structured Matrix Nearness Problems: Theory and Algorithms

In many areas of science one often has a given matrix, representing for example a measured data set, and is required to find a matrix that is closest in a suitable norm to this matrix and additionally possesses a structure, inherited from the underlying model or imposed by the application. We call these problems structured matrix nearness problems. We look at three different groups of these problems that come from real applications, analyze the properties of the corresponding matrix structure, and propose algorithms to solve them efficiently.

The first part of this thesis concerns the nearness problem of finding the nearest $k$-factor correlation matrix $C(X) = \diag(I_n - XX^T) + XX^T$ to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the $n\times k$ matrix $X$, where distance is measured in the Frobenius norm. Such problems arise, for example, when one is investigating factor models of collateralized debt obligations (CDOs) or multivariate time series. We examine several algorithms for solving the nearness problem that differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that, among the tested methods, the spectral projected gradient method is the clear winner.

In the second part we look at two two-sided optimization problems where the matrix of unknowns $Y\in\R^{n\times p}$ lies in the Stiefel manifold. These two problems come from an application in atomic chemistry where one is looking for atomic orbitals with prescribed occupation numbers. We analyze these two problems, propose an analytic optimal solution of the first, and show that an optimal solution of the second problem can be found by solving a convex quadratic programming problem with box constraints and $p$ unknowns. We prove that the latter problem can be solved by the active-set method in at most $2p$ iterations.
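The first of these problems can be illustrated with a small projected gradient sketch. This is not the spectral projected gradient method of the thesis (which adds a Barzilai–Borwein step and a nonmonotone line search); it is a minimal fixed-step variant under the assumption that the natural constraint is a unit bound on the row norms of $X$, which keeps $\diag(I_n - XX^T)$ nonnegative. All function names are illustrative.

```python
import numpy as np

def C(X):
    # k-factor correlation structure: unit diagonal, off-diagonal from X X^T
    G = X @ X.T
    return G - np.diag(np.diag(G)) + np.eye(X.shape[0])

def objective(X, A):
    # squared Frobenius distance from C(X) to the given symmetric matrix A
    return np.linalg.norm(C(X) - A, 'fro') ** 2

def grad(X, A):
    # gradient of the objective; the diagonal of C(X) is fixed at 1,
    # so only off-diagonal residuals contribute
    M = X @ X.T - A
    np.fill_diagonal(M, 0.0)
    return 4.0 * M @ X

def project(X):
    # enforce the assumed natural constraint: each row of X has 2-norm <= 1
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, 1.0)

def projected_gradient(A, X0, iters=2000, step=5e-3):
    # plain projected gradient with a fixed step size (illustration only)
    X = project(X0)
    for _ in range(iters):
        X = project(X - step * grad(X, A))
    return X
```

For small test matrices a fixed step suffices; the thesis's methods are needed for robustness and speed on realistic problem sizes.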
Subsequently, we analyze the set of optimal solutions $\mathcal{C}=\{Y\in\R^{n\times p}: Y^TY=I_p,\ Y^TNY=D\}$ of the first problem for $N$ symmetric and $D$ diagonal and find that a slight modification of it is a Riemannian manifold. We derive the geometric objects required to make an optimization over this manifold possible. We propose an augmented Lagrangian-based algorithm that uses these geometric tools and allows us to optimize an arbitrary smooth function over $\mathcal{C}$. This algorithm can be used to select a particular solution from the set $\mathcal{C}$ by posing a new optimization problem. We compare it numerically with a similar algorithm that does not apply these geometric tools and find that our algorithm yields better performance.

The third part is devoted to low-rank nearness problems in the $Q$-norm, where the matrix of interest additionally has linear structure, meaning it lies in the subspace spanned by $s$ predefined matrices $U_1,\ldots, U_s\in\{0,1\}^{n\times p}$. These problems are often associated with model reduction, for example in speech encoding, filter design, or latent semantic indexing. We investigate three approaches that support any linear structure and examine further the geometric reformulation by Schuermans et al.\ (2003). We improve their algorithm in terms of reliability by applying the augmented Lagrangian method and show in our numerical tests that the resulting algorithm yields better performance than other existing methods.
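A concrete member of the set $\mathcal{C}$ above is easy to exhibit in one special case (a sketch, not the thesis's general analytic solution): when the prescribed diagonal $D$ consists of $p$ eigenvalues of the symmetric matrix $N$, the matrix of corresponding orthonormal eigenvectors satisfies both constraints.

```python
import numpy as np

def eigenvector_solution(N, p):
    # One element of C = {Y : Y^T Y = I_p, Y^T N Y = D} when D holds p
    # eigenvalues of the symmetric matrix N: take Y to be the matrix of
    # the corresponding orthonormal eigenvectors.
    w, V = np.linalg.eigh(N)   # N = V diag(w) V^T, columns of V orthonormal
    Y = V[:, :p]               # eigenvectors of the p smallest eigenvalues
    D = np.diag(w[:p])         # the quadratic form Y^T N Y is then diagonal
    return Y, D
```

Since the columns of $V$ are orthonormal, $Y^TY = I_p$ holds automatically, and $Y^TNY = \diag(\lambda_1,\ldots,\lambda_p)$ follows from the eigendecomposition; characterizing and optimizing over the full set $\mathcal{C}$ for general $D$ is the subject of the second part of the thesis.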