## Research
In the rest of this page you can find more information and references on research projects that are representative of my work. Of course, the corresponding papers are always the most accurate and complete source of information on any of these projects. For a quick read, I recommend that you take a look at the few representative projects below.

## Performance of convex optimization in high-dimensional structured signal recovery

We consider the problem of inferring structured high-dimensional signals from noise-corrupted and limited linear measurements. Without exploiting additional structural properties, it would be impossible to infer information from fewer observations than unknowns. Thus, with the advent of compressed sensing theory, the idea that has been popularized over the past decade or so is to use convex-relaxation techniques that are capable of taking advantage of the problem structure in computationally efficient ways. In general, such methods minimize a weighted sum of a convex loss function and a convex regularizer, which aim to suppress the effect of noise and to promote the signal structure, respectively. However, this poses a number of questions: for instance, how accurately can the performance of such methods be predicted, and how should the loss and the regularizer be chosen?

Combining tools from convex analysis, Gaussian process theory, and high-dimensional probability, we develop a general framework that fully addresses the challenge of precise characterizations when the measurement vectors are Gaussian; a prototypical setting in compressed sensing theory. Our analysis is based on a new tight comparison theorem for Gaussian processes (see CGMT for details), which serves as the main building block around which we build a powerful framework that applies under general settings on the loss and regularizer functions and is informative about a wide class of performance metrics (e.g., mean-square error, support-recovery probability, etc.).

## Quantization of high-dimensional data

In the problem of structured signal recovery from high-dimensional linear observations, it is commonly assumed that full-precision measurements are available. Under this assumption, the recovery performance of the popular Generalized Lasso (G-Lasso) is by now well established. In this work, we extend these types of results to the practically relevant setting of quantized measurements. We study two extremes of quantization schemes, namely uniform and one-bit quantization; the former imposes no limit on the number of quantization bits, while the latter allows only a single bit. In the presence of a uniform dithering signal and when the measurement vectors are sub-gaussian, we show that the same algorithm (i.e., the G-Lasso) has favorable recovery guarantees under both uniform and one-bit quantization. Our theoretical results shed light on the appropriate choice of the range of values of the dithering signal and accurately capture the error dependence on the problem parameters.
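The dithered one-bit scheme can be illustrated with a small numpy sketch. Everything below is an illustrative toy, not the construction from the papers: the dither range `lam` is set using the true signal's scale (an oracle choice; in practice one needs an a priori bound), the regularization weight is a heuristic, and the G-Lasso is solved by plain proximal gradient (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 2000, 50, 4        # measurements, dimension, sparsity (toy sizes)

# k-sparse ground-truth signal.
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((n, p)) / np.sqrt(n)    # Gaussian measurement matrix

# One-bit measurements with uniform dither on [-lam, lam]. Whenever
# |<a_i, x>| <= lam, E[lam * sign(<a_i, x> + dither_i)] = <a_i, x>, so
# lam * y acts as an unbiased (though noisy) surrogate for the
# full-precision measurements.
lam = 2.0 * np.max(np.abs(A @ x_true))          # oracle choice, for illustration
dither = rng.uniform(-lam, lam, size=n)
y = np.sign(A @ x_true + dither)

# G-Lasso via ISTA: proximal gradient on 0.5*||lam*y - A x||^2 + reg*||x||_1.
reg = lam                                        # heuristic regularization weight
def glasso_ista(A, b, reg, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - b)                           # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * reg, 0.0)   # soft-threshold
    return x

x_hat = glasso_ista(A, lam * y, reg)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error from one-bit measurements: {rel_err:.3f}")
```

Despite keeping only one bit per measurement, the dither makes the quantized observations informative enough for the same convex program to produce a meaningful estimate.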
For example, our error analysis shows that the G-Lasso with one-bit uniformly dithered measurements incurs only a logarithmic rate loss compared to full-precision measurements.

## Phase retrieval via linear programming

We study algorithms for solving quadratic systems of equations based on optimization methods over polytopes. Specifically, we study a convex formulation of the phase retrieval problem, which estimates the unknown signal by solving a simple linear program over a polytope constructed from the measurements. We present a sharp characterization of the high-dimensional geometry of this polytope under Gaussian measurements. This characterization allows us to derive asymptotically exact performance guarantees for the algorithm, which also reveal a phase-transition phenomenon with respect to its sample complexity. Moreover, the geometric insights gained from our analysis lead to a new nonconvex formulation of the phase retrieval problem and an accompanying iterative algorithm, which we call PhaseLamp. We show that this new algorithm has superior recovery performance over the original PhaseMax method. Finally, as yet another variation on the theme of performing phase retrieval via polytope optimization, we propose a weighted version of PhaseLamp and demonstrate, through numerical simulations, that it outperforms several state-of-the-art algorithms under both generic Gaussian measurements and the more realistic Fourier-type measurements that arise in phase retrieval applications.

## Provably efficient decoding in Massive MIMO wireless networks

The maximum-likelihood (ML) decoder for symbol detection in large multiple-input multiple-output (MIMO) wireless communication systems is typically computationally prohibitive. In this work, we study a popular and practical alternative, namely the Box-Relaxation Optimization (BRO) decoder (and its variations), which is a natural convex relaxation of the ML decoder.
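The box relaxation replaces the discrete constellation constraint with its convex hull. A minimal numpy sketch for BPSK symbols follows; the projected-gradient solver, the antenna counts, and the noise level are illustrative choices, not the setup analyzed in the papers.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tx, n_rx = 16, 64          # transmit / receive antennas (toy sizes)
sigma = 0.1                  # noise standard deviation

x_true = rng.choice([-1.0, 1.0], size=n_tx)     # BPSK symbols
H = rng.standard_normal((n_rx, n_tx))           # i.i.d. real Gaussian channel
y = H @ x_true + sigma * rng.standard_normal(n_rx)

# Box-relaxation decoder: relax x in {-1,+1}^n to the box [-1,1]^n, solve the
# resulting convex program by projected gradient descent, then round to symbols.
def bro_decode(H, y, iters=300):
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        x = x - step * H.T @ (H @ x - y)   # gradient step on 0.5*||y - Hx||^2
        x = np.clip(x, -1.0, 1.0)          # project onto the box
    return np.sign(x)                      # hard decisions

x_bro = bro_decode(H, y)
x_zf = np.sign(np.linalg.pinv(H) @ y)      # zero-forcing baseline
ser_bro = np.mean(x_bro != x_true)
ser_zf = np.mean(x_zf != x_true)
print(f"SER (box relaxation): {ser_bro:.3f}, SER (zero forcing): {ser_zf:.3f}")
```

The relaxed program is convex, so it is solvable in polynomial time, unlike the combinatorial ML search over all 2^n symbol vectors.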
For i.i.d. real Gaussian channels with additive Gaussian noise, we obtain exact asymptotic expressions for the symbol error rate (SER) of the BRO. The formulas are particularly simple, they yield useful insights, and they allow accurate comparisons to idealized schemes such as the matched-filter bound, as well as to conventional linear decoders such as the zero-forcing decoder. Importantly, the derived formulas can be used to answer important system-design questions, such as: How many receiver antennas are needed to meet system specifications? What constellation type should be used? How sub-optimal are these decoders? We further consider the operation of such systems under power constraints that require the use of low-end hardware at the receiver side, such as low-resolution analog-to-digital converters. We propose the use of the BRO and show that its error performance quickly approaches that of full-resolution measurements when the number of quantization bits is as low as four.

## Computational imaging: Non-line-of-sight imaging and lensless cameras

Imagine trying to image a hidden room through an open doorway, or wanting to prevent collisions by timely detecting incoming vehicles at a crossroad. These are instances of the problem of non-line-of-sight (NLOS) imaging. NLOS imaging systems differ from classical imaging systems in that they only have indirect access to the signal of interest via reflections from intermediary objects (e.g., walls, floors, etc.). Typical surfaces of these objects reflect light diffusely, effectively scrambling beam orientation before the light reaches the camera and thus rendering the inverse problem very poorly conditioned. In order to compensate for the losses induced by diffuse reflections and turn the problem into a well-conditioned one, our work develops a computational NLOS imaging technique that exploits natural occluders.
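The effect of an occluder on conditioning can be illustrated with a toy 1-D transport model. This is only a caricature of the physics, not the forward model from the papers: diffuse transport is a broad smooth kernel, and the hypothetical occluder is a random binary visibility mask.

```python
import numpy as np

rng = np.random.default_rng(2)

p = 40                                   # pixels in a toy 1-D hidden scene
i, j = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")

# Diffuse (occluder-free) transport: a broad smooth kernel that mixes every
# scene pixel into every measurement, making the matrix nearly rank-deficient.
A_diffuse = np.exp(-((i - j) / 10.0) ** 2)

# A hypothetical occluder blocks different scene pixels for different
# measurements, modeled here as a random binary visibility mask.
mask = rng.integers(0, 2, size=(p, p)).astype(float)
A_occluded = A_diffuse * mask

cond_diffuse = np.linalg.cond(A_diffuse)
cond_occluded = np.linalg.cond(A_occluded)
print(f"condition number without occluder: {cond_diffuse:.1e}")
print(f"condition number with occluder:    {cond_occluded:.1e}")
```

The shadows break the smoothness of the transport matrix, dropping its condition number by many orders of magnitude, which is the basic reason occluders make the inverse problem tractable.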
We formulate a forward light-propagation model that accounts for and exploits the structure that the occluders introduce in the measurements. We combine this with computational routines that use regularization to promote image priors. Thus, we are able to reveal images of hidden scenes while eliminating the need for the highly specialized ultrafast time-of-flight cameras required by previous techniques. In subsequent work, we achieve even better reconstruction quality by adapting the computational routine to the noise structure of low-photon regimes, i.e., Poisson statistics.
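The overall recipe of forward model plus regularized inversion can be sketched in a few lines. The transport matrix, scene, photon level, and ridge regularizer below are all hypothetical stand-ins: the papers use a physically derived forward model, image priors rather than a plain ridge penalty, and Poisson-aware solvers.

```python
import numpy as np

rng = np.random.default_rng(3)

p = 40
i, j = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
# Hypothetical nonnegative occluded transport matrix (toy stand-in for the
# forward light-propagation model).
A = np.exp(-((i - j) / 10.0) ** 2) * rng.integers(0, 2, size=(p, p))

x_true = np.zeros(p)
x_true[10:18] = 1.0                      # a simple piecewise-constant hidden scene

# Low-photon measurements follow Poisson statistics.
flux = 50.0                              # mean photon-count scale
y = rng.poisson(flux * (A @ x_true)) / flux

# Ridge-regularized inversion: a minimal stand-in for the prior-promoting
# computational routine.
tau = 1e-2
x_hat = np.linalg.solve(A.T @ A + tau * np.eye(p), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Matching the data term to the actual Poisson likelihood, as in the subsequent work described above, further improves reconstruction when photon counts are low.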