Research


With the increasing availability of large volumes of data, both the boundaries of what is deemed possible and the expectations of what should be possible have shifted, in turn creating a demand for better-quality information extraction. This often translates to the need to recover signals of interest from observations that can be incomplete, indirect, and/or corrupted by noise, outliers or interference. What’s more, the desired information may be high-dimensional, e.g., high-resolution images or signals in massive MIMO. All this makes the corresponding inverse problems very poorly conditioned.

I am interested in rigorous methodologies that are able to extract useful signals in ways that are computationally efficient, statistically optimal and robust. My work involves both the design of new inference techniques and the analysis of the performance of existing ones. What motivates such theoretical investigations is the fact that their outcomes commonly lead to improved system performance, since they allow the engineer to fine-tune the system parameters involved.

In the rest of this page you can find more information and references on research projects that are representative of my work. Of course, the corresponding papers are always the most accurate and complete source of information on any of these projects. For a quick read, I recommend taking a look at the few representative slides that I have prepared for each of them.

Performance of convex optimization in high-dimensional structured signal recovery

We consider the problem of inferring structured high-dimensional signals from noise-corrupted and limited linear measurements. Without exploiting additional structural properties, it would be impossible to infer information from fewer observations than unknowns. Thus, with the advent of compressed sensing theory, the idea that has been popularized over the past decade or so is to use convex-relaxation techniques that are capable of taking advantage of the problem structure in computationally efficient ways. In general, such methods minimize a weighted sum of a convex loss function and a convex regularizer, which account for the noise and promote the signal structure, respectively. However, this poses a number of questions: How should the loss function and the regularizer be chosen with respect to the structural information? How should they be weighted? How many measurements suffice? How robust is the estimate to outliers? In principle, one can answer all these questions once a precise characterization of the inference performance as a function of all the problem parameters is available.
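As a concrete illustration of this loss-plus-regularizer template, here is a minimal sketch (my own toy example, not taken from any specific paper) that solves one popular instance, the LASSO with a squared loss and an l1 regularizer, on synthetic Gaussian measurements using the cvxpy modeling library; all names and parameter values below are assumptions chosen for the demo.

    # Minimal sketch: squared loss + l1 regularizer, one instance of the
    # convex "loss + regularizer" template discussed above. All names and
    # parameter values are illustrative assumptions.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 10                           # ambient dimension, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian measurement matrix
    y = A @ x_true + 0.05 * rng.standard_normal(m)  # noisy, limited linear observations

    lam = 0.1                                       # weight balancing loss and regularizer
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x) + lam * cp.norm1(x)))
    prob.solve()

    rel_err = np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)
    print("relative estimation error:", rel_err)

Swapping the loss (e.g., an l1 loss for outlier robustness) or the regularizer (e.g., a nuclear norm for low-rank structure) stays within the same template, which is exactly why a unified performance analysis is valuable.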

Combining tools from convex analysis, Gaussian process theory and high-dimensional probability, we develop a general framework that fully addresses the challenge of precise characterizations when the measurement vectors are Gaussian, a prototypical setting in compressed sensing theory. Our analysis is based on a new tight comparison theorem for Gaussian processes (see CGMT for details), which serves as the main building block of a powerful framework that applies under general conditions on the loss and regularizer functions and is informative about a wide class of performance metrics (e.g., mean-square error, support recovery probability, etc.).
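For readers curious about the shape of the comparison, the following is a rough schematic in my own notation (see the CGMT paper for the precise assumptions on the constraint sets and on the convex-concave function ψ): the primary optimization, which involves the Gaussian measurement matrix G, is compared to an auxiliary optimization that involves only two independent Gaussian vectors g and h,

    % Schematic (paraphrased) of the two problems related by the comparison theorem.
    \Phi(G)    = \min_{w \in \mathcal{S}_w} \; \max_{u \in \mathcal{S}_u} \; u^\top G w + \psi(w, u)
    \phi(g, h) = \min_{w \in \mathcal{S}_w} \; \max_{u \in \mathcal{S}_u} \; \|w\|_2 \, g^\top u + \|u\|_2 \, h^\top w + \psi(w, u)

Under appropriate convexity assumptions the optimal values of the two problems are tightly coupled, so the much simpler auxiliary problem can be analyzed in place of the original one; this is what makes the precise performance characterizations above tractable.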

We further consider extensions to the problem of fitting data to nonlinear link functions. In general, this can lead to computationally intractable optimization programs. Thus, practitioners often resort to entirely ignoring the presence of the nonlinearity and instead employ inference methods tailored to linear models, such as least-squares and the LASSO. In light of this common practice the question arises: how well does this heuristic perform? Posed differently, this question also asks: how robust is the LASSO to model misspecifications? We provide an exact answer to these questions for Gaussian matrices. In particular, we show that the LASSO treats the nonlinearity precisely as if it were additive Gaussian noise, with an effective noise variance that is a simple function of the nonlinearity.
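The following small Monte Carlo sketch illustrates the phenomenon (my own toy example with a sign nonlinearity; the setup and names are assumptions, not taken from the paper): the LASSO is fit to the nonlinear observations as if they were linear, yet it still recovers the direction of the underlying signal.

    # Toy illustration: fit the LASSO to nonlinear observations y = f(A x_true),
    # with f = sign, while pretending the model is linear. Setup is an assumption.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n, m, k = 200, 120, 10
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 1.0
    x_true /= np.linalg.norm(x_true)

    A = rng.standard_normal((m, n))
    y = np.sign(A @ x_true)                  # nonlinear link, ignored at inference time

    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x) / (2 * m) + 0.05 * cp.norm1(x)))
    prob.solve()

    x_hat = x.value / np.linalg.norm(x.value)
    print("correlation with true direction:", float(x_hat @ x_true))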

Quantization of high-dimensional data

In the problem of structured signal recovery from high-dimensional linear observations, it is commonly assumed that full-precision measurements are available. Under this assumption, the recovery performance of the popular Generalized Lasso (G-Lasso) is by now well-established. In this work, we extend these types of results to the practically relevant setting of quantized measurements. We study two extremes of quantization schemes, namely, uniform and one-bit quantization; the former imposes no limit on the number of quantization bits, while the latter allows only a single bit. In the presence of a uniform dithering signal and when the measurement vectors are sub-Gaussian, we show that the same algorithm (i.e., the G-Lasso) has favorable recovery guarantees for both uniform and one-bit quantization schemes. Our theoretical results shed light on the appropriate choice of the range of values of the dithering signal and accurately capture the error dependence on the problem parameters. For example, our error analysis shows that the G-Lasso with one-bit uniformly dithered measurements incurs only a logarithmic rate loss compared to full-precision measurements.
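As a rough illustration of the one-bit case, the sketch below (my own toy example; the dither range R, the rescaling of the one-bit output and all names are assumptions) generates uniformly dithered one-bit measurements and feeds them to the very same G-Lasso program used in the full-precision case.

    # Toy illustration: one-bit measurements with a uniform dither, recovered by the
    # same Generalized Lasso solve used for full-precision data. All values assumed.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    n, m, k = 200, 400, 10
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n))
    R = 4.0                                   # dither range; should cover |A x_true| w.h.p.
    dither = rng.uniform(-R, R, size=m)       # uniform dithering signal
    y = R * np.sign(A @ x_true + dither)      # one-bit, dithered, rescaled observations

    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x) / (2 * m) + 0.2 * cp.norm1(x)))
    prob.solve()
    print("estimation error:", np.linalg.norm(x.value - x_true))

The rescaling by R reflects the fact that, for a dither uniform on [-R, R], the expectation of R·sign(t + dither) equals t whenever |t| ≤ R, which is also why the choice of the dither range matters.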

Phase-retrieval via linear programming

We study algorithms for solving quadratic systems of equations based on optimization methods over polytopes. Specifically, we consider a convex formulation of the phase retrieval problem, which estimates the unknown signal by solving a simple linear program over a polytope constructed from the measurements. We present a sharp characterization of the high-dimensional geometry of the aforementioned polytope under Gaussian measurements. This characterization allows us to derive asymptotically exact performance guarantees for the algorithm, which also reveal a phase transition phenomenon with respect to its sample complexity. Moreover, the geometric insights gained from our analysis lead to a new nonconvex formulation of the phase retrieval problem and an accompanying iterative algorithm, which we call PhaseLamp. We show that this new algorithm has superior recovery performance over the original PhaseMax method. Finally, as yet another variation on the theme of performing phase retrieval via polytope optimization, we propose a weighted version of PhaseLamp and demonstrate, through numerical simulations, that it outperforms several state-of-the-art algorithms under both generic Gaussian measurements and more realistic Fourier-type measurements that arise in phase retrieval applications.
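To make the polytope formulation concrete, here is a minimal PhaseMax-style sketch for real-valued signals (my own toy example; the anchor vector is constructed artificially here, whereas in practice it would come from a proper initialization, and all names are assumptions): the signal is estimated by maximizing correlation with the anchor over the polytope cut out by the magnitude measurements.

    # Toy PhaseMax-style linear program for real-valued phase retrieval:
    # maximize <anchor, x> subject to |<a_i, x>| <= b_i. All values are assumptions.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(3)
    n, m = 50, 400
    x_true = rng.standard_normal(n)
    x_true /= np.linalg.norm(x_true)

    A = rng.standard_normal((m, n))
    b = np.abs(A @ x_true)                            # magnitude-only measurements

    anchor = x_true + 0.8 * rng.standard_normal(n)    # stand-in for a real initializer
    anchor /= np.linalg.norm(anchor)

    x = cp.Variable(n)
    prob = cp.Problem(cp.Maximize(anchor @ x), [cp.abs(A @ x) <= b])
    prob.solve()

    err = min(np.linalg.norm(x.value - x_true), np.linalg.norm(x.value + x_true))
    print("recovery error (up to global sign):", err)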

Provably efficient decoding in Massive MIMO wireless networks

The maximum-likelihood (ML) decoder for symbol detection in large multiple-input multiple-output wireless communication systems is typically computationally prohibitive. In this work, we study a popular and practical alternative, namely the Box-relaxation optimization (BRO) decoder (and its variations), which is a natural convex relaxation of the ML decoder. For iid real Gaussian channels with additive Gaussian noise, we obtain exact asymptotic expressions for the symbol error rate (SER) of the BRO. The formulas are particularly simple, they yield useful insights, and they allow accurate comparisons to idealized schemes such as the matched-filter bound, as well as to conventional linear decoders such as the zero-forcing decoder. Importantly, the derived formulas can be used to answer important system-design questions, such as: How many receiver antennas are needed to meet system qualifications? What constellation type should be used? How sub-optimal are these decoders?
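For concreteness, here is a minimal sketch of the box-relaxation idea for BPSK symbols (my own toy example; channel and noise parameters are assumptions): the discrete constraint x in {-1, +1}^n is relaxed to the box [-1, 1]^n, the resulting convex program is solved, and the output is thresholded to the nearest symbols.

    # Toy box-relaxation decoder for BPSK: relax {-1,+1}^n to [-1,1]^n, solve the
    # convex program, then threshold. Channel/noise setup is an assumption.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(4)
    n, m, sigma = 64, 128, 0.3                    # transmit antennas, receive antennas, noise std
    x_true = rng.choice([-1.0, 1.0], size=n)      # BPSK symbols

    H = rng.standard_normal((m, n)) / np.sqrt(m)  # iid Gaussian channel
    y = H @ x_true + sigma * rng.standard_normal(m)

    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - H @ x)), [x >= -1, x <= 1])
    prob.solve()

    x_hat = np.sign(x.value)
    print("symbol error rate:", np.mean(x_hat != x_true))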

We further consider the operation of such systems under power constraints, which require the use of low-end hardware at the receiver side, such as low-resolution analog-to-digital converters. We propose the use of the BRO and show that its error performance quickly approaches that of full-resolution measurements when the quantization resolution is as low as four bits.

Computational imaging: Non-line-of-sight imaging and lensless cameras

Imagine trying to image a hidden room through an open doorway, or wanting to prevent collisions by timely detecting incoming vehicles at a crossroad. These are instances of the problem of non-line-of-sight (NLOS) imaging. NLOS imaging systems differ from classical imaging systems in that they only have indirect access to the signal of interest via reflections from intermediary objects (e.g., walls, floors, etc.). Typical surfaces of these objects reflect light diffusely, effectively removing beam orientation information before the light reaches the camera and thus rendering the inverse problem very poorly conditioned. In order to compensate for the losses induced by diffuse reflections and turn the problem into a well-conditioned one, our work develops a computational NLOS imaging technique that exploits natural occluders. We formulate a forward light propagation model that accounts for and exploits the structure that the occluders introduce in the measurements. We combine this with computational routines that use regularization to also promote image priors. Thus, we are able to reveal images of hidden scenes while eliminating the need for the highly specialized ultrafast time-of-flight cameras required by previous techniques. In our subsequent work, we achieve even better reconstruction quality by modifying the computational routine according to the noise structure in low-photon regimes, i.e., Poisson statistics.
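At a high level, the reconstruction step follows the same regularized-inversion template discussed earlier; the sketch below is purely illustrative, with a random stand-in for the occluder-aware light-transport matrix and an l1 regularizer as placeholders, since the actual forward model and image priors are specified in the papers.

    # Illustrative reconstruction template: linear forward model y = A f + noise,
    # inverted with a regularized, nonnegativity-constrained least-squares program.
    # The forward matrix A is a random stand-in, not the actual occluder-aware model.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(5)
    npix, nmeas = 16 * 16, 600                             # hidden-scene pixels, camera measurements
    f_true = np.clip(rng.standard_normal(npix), 0, None)   # nonnegative hidden scene

    A = rng.uniform(0.0, 1.0, size=(nmeas, npix))          # stand-in light-transport matrix
    y = A @ f_true + 0.5 * rng.standard_normal(nmeas)

    f = cp.Variable(npix, nonneg=True)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ f) + 1.0 * cp.norm1(f)))
    prob.solve()
    print("reconstruction error:", np.linalg.norm(f.value - f_true))

In the low-photon regime, the squared loss above would be replaced by a data-fidelity term matched to Poisson statistics, which is the modification mentioned in the last sentence.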