COLLOQUIUM

3:00 p.m., Friday (October 10, 2008)

MATX 1100

Ozgur Yilmaz
UBC

Sparse recovery via non-convex optimization

Abstract: The sparse recovery problem has received considerable attention, both for its role in transform coding with redundant dictionaries and, more importantly, because it inspired "compressed sensing". The main questions are:

Suppose we have a "sparse" vector x in R^n, i.e., x has only a few non-zero entries. Given b = Ax, where A is a "short" matrix (m-by-n with m < n), can we recover x? Under what conditions on A? In a computationally tractable way? What if the measurements are noisy? What if x is not sparse but can be well approximated by a sparse vector (i.e., x is "compressible")?
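For concreteness, here is a minimal numerical sketch of this setup in Python (the dimensions m = 50, n = 200 and sparsity level k = 5 are arbitrary illustrative choices, not values from the talk):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 50, 200, 5            # m measurements, ambient dimension n, sparsity k

    # A sparse vector x with k non-zero entries on a random support.
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)

    # A "short" (wide) Gaussian measurement matrix and the measurements b = Ax.
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    b = A @ x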

It has recently been shown by Candes-Romberg-Tao, and independently by Donoho, that one can recover estimates of sparse and compressible signals from an "incomplete" set of noisy measurements via one-norm minimization, under certain conditions on the "measurement matrix" A. These conditions are satisfied (asymptotically optimally) when A is a random matrix whose entries are drawn i.i.d. from a Gaussian distribution.
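The one-norm minimization itself (basis pursuit: minimize ||x||_1 subject to Ax = b) can be recast as a linear program via the standard split x = u - v with u, v >= 0. The following sketch continues from the setup above; it is a textbook reformulation, not code from the talk:

    from scipy.optimize import linprog
    import numpy as np

    # min sum(u) + sum(v)  s.t.  A(u - v) = b,  u >= 0, v >= 0,  where x = u - v.
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    x_hat = u - v
    print("one-norm recovery error:", np.linalg.norm(x_hat - x))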

In this talk, we present theoretical recovery guarantees for decoding by p-quasinorm minimization with 0 < p < 1 in the setting described above, and we prove that these guarantees are significantly better than those obtainable with one-norm minimization. In particular, we show that decoders based on p-quasinorm minimization are (l^2, l^p) instance optimal. Moreover, these decoders are (l^2, l^2) instance optimal in probability (the latter result is related to a result on the Banach-Mazur distances of p-convex bodies to their convex hulls). Finally, we comment on algorithmic issues.
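The decoders in the talk are defined by the minimization problem itself; as one possible algorithmic approach (not necessarily the one discussed in the talk), the non-convex problem of minimizing ||x||_p^p subject to Ax = b is often attacked by iteratively reweighted least squares. A sketch, continuing the variables above, with a fixed smoothing parameter eps chosen for illustration (in practice eps is often decreased over the iterations):

    import numpy as np

    def irls_lp(A, b, p=0.5, iters=50, eps=1e-3):
        """Iteratively reweighted least squares for min ||x||_p^p s.t. Ax = b.

        A heuristic for the non-convex problem (0 < p < 1): each iterate
        solves an equality-constrained weighted least-squares problem, with
        weights derived from the previous iterate; eps smooths the weights
        to avoid division by zero at zero entries.
        """
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares initial guess
        for _ in range(iters):
            w = (x**2 + eps) ** (1 - p / 2)        # inverse weights, approx |x_i|^(2-p)
            AW = A * w                             # A @ diag(w), columns scaled by w
            x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
        return x

    x_p = irls_lp(A, b, p=0.5)
    print("p-quasinorm (IRLS) recovery error:", np.linalg.norm(x_p - x))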

This is joint work with Rayan Saab.

Refreshments will be served at 2:45 p.m. (Math Lounge, MATX 1115).


