Authors: Schiebinger G; Robeva E; Recht B.
Pages: 1–30
Abstract: This article provides a theoretical analysis of diffraction-limited superresolution, demonstrating that arbitrarily close point sources can be resolved in ideal situations. Precisely, we assume that the incoming signal is a linear combination of $M$ shifted copies of a known waveform with unknown shifts and amplitudes, and one only observes a finite collection of evaluations of this signal. We characterize properties of the base waveform such that the exact translations and amplitudes can be recovered from $2M+1$ observations. This recovery can be achieved by solving a weighted version of basis pursuit over a continuous dictionary. Our analysis shows that $\ell_1$-based methods enjoy the same separation-free recovery guarantees as polynomial root-finding techniques, such as Prony's method or Vetterli's method for signals of finite rate of innovation. Our proof techniques combine classical polynomial interpolation techniques with contemporary tools from compressed sensing.
PubDate: Mon, 29 May 2017 00:00:00 GMT
DOI: 10.1093/imaiai/iax006
Issue No: Vol. 7, No. 1 (2017)
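The root-finding baseline the abstract compares against can be sketched in a few lines. Below is a minimal implementation of classical Prony's method (not the paper's weighted basis pursuit), assuming exact noiseless samples $s_k = \sum_j a_j z_j^k$ of a sum of $M$ complex exponentials; in this classical form, $2M$ consecutive samples suffice.

```python
import numpy as np

def prony(samples, M):
    """Recover M nodes z_j and amplitudes a_j from noiseless samples
    s_k = sum_j a_j * z_j**k, k = 0, ..., 2M-1 (classical Prony's method)."""
    s = np.asarray(samples, dtype=complex)
    # Linear-prediction step: the samples satisfy a recurrence whose
    # characteristic polynomial has the nodes z_j as its roots.
    H = np.array([s[i:i + M] for i in range(M)])   # M x M Hankel matrix
    coeffs = np.linalg.solve(H, -s[M:2 * M])       # c_0, ..., c_{M-1}
    # Nodes are roots of z^M + c_{M-1} z^{M-1} + ... + c_0.
    nodes = np.roots(np.concatenate(([1.0], coeffs[::-1])))
    # Amplitudes from a Vandermonde least-squares fit to all samples.
    V = np.vander(nodes, 2 * M, increasing=True).T  # V[k, j] = z_j**k
    amps = np.linalg.lstsq(V, s, rcond=None)[0]
    return nodes, amps
```

With exact data the shifts (encoded as node angles) and amplitudes are recovered exactly, with no separation condition, which is the behavior the abstract shows $\ell_1$-based methods share.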

Authors: Zhu Y; Lafferty J.
Pages: 31–82
Abstract: We formulate the notion of minimax estimation under storage or communication constraints, and prove an extension of Pinsker's theorem for non-parametric estimation over Sobolev ellipsoids. Placing limits on the number of bits used to encode any estimator, we give tight lower and upper bounds on the excess risk due to quantization in terms of the number of bits, the signal size and the noise level. This establishes the Pareto-optimal tradeoff between storage and risk under quantization constraints for Sobolev spaces. Our results and proof techniques combine elements of rate distortion theory and minimax analysis. The proposed quantized estimation scheme, which shows achievability of the lower bounds, is adaptive in the usual statistical sense, achieving the optimal quantized minimax rate without knowledge of the smoothness parameter of the Sobolev space. It is also adaptive in a computational sense, as it constructs the code only after observing the data, to dynamically allocate more codewords to blocks where the estimated signal size is large. Simulations are included that illustrate the effect of quantization on statistical risk.
PubDate: Wed, 21 Jun 2017 00:00:00 GMT
DOI: 10.1093/imaiai/iax007
Issue No: Vol. 7, No. 1 (2017)
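The basic storage/risk tradeoff can be illustrated with a toy experiment. The sketch below is not the paper's adaptive block-coding scheme: it is a plain uniform scalar quantizer applied coefficientwise (the function `quantize_uniform` and its parameters are illustrative assumptions), showing only that the excess risk due to quantization shrinks as the bit budget per coefficient grows.

```python
import numpy as np

def quantize_uniform(theta, bits, radius):
    """Map each coefficient to the nearest of 2**bits uniformly
    spaced levels on [-radius, radius]."""
    levels = np.linspace(-radius, radius, 2**bits)
    idx = np.argmin(np.abs(theta[:, None] - levels[None, :]), axis=1)
    return levels[idx]

# Toy "signal": smoothly decaying coefficients, as for a Sobolev-type class.
theta = 1.0 / np.arange(1, 65)
risks = {b: np.mean((theta - quantize_uniform(theta, b, 1.0))**2)
         for b in (2, 4, 8)}
```

Here `risks` decreases monotonically in the bit budget; the paper's scheme improves on this naive allocation by spending more codewords on blocks where the estimated signal is large.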

Authors: Baraniuk R; Foucart S; Needell D; et al.
Pages: 83–104
Abstract: One-bit compressive sensing has extended the scope of sparse recovery by showing that sparse signals can be accurately reconstructed even when their linear measurements are subject to the extreme quantization scenario of binary samples—only the sign of each linear measurement is maintained. Existing results in one-bit compressive sensing rely on the assumption that the signals of interest are sparse in some fixed orthonormal basis. However, in most practical applications, signals are sparse with respect to an overcomplete dictionary, rather than a basis. There has already been a surge of activity to obtain recovery guarantees under such a generalized sparsity model in the classical compressive sensing setting. Here, we extend the one-bit framework to this important model, providing a unified theory of one-bit compressive sensing under dictionary sparsity. Specifically, we analyze several different algorithms—based on convex programming and on hard thresholding—and show that, under natural assumptions on the sensing matrix (satisfied by Gaussian matrices), these algorithms can efficiently recover analysis-dictionary-sparse signals in the one-bit model.
PubDate: Thu, 10 Aug 2017 00:00:00 GMT
DOI: 10.1093/imaiai/iax009
Issue No: Vol. 7, No. 1 (2017)
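A minimal sketch of the hard-thresholding idea in the simplest setting (sparsity in the canonical basis, Gaussian sensing matrix), not the dictionary-sparse algorithms analyzed in the paper: back-project the signs and keep the largest entries. Since one-bit measurements destroy all scale information, only the direction of the signal is recoverable, so the output is normalized.

```python
import numpy as np

def one_bit_ht(y, A, s):
    """One step of hard-thresholding recovery from one-bit data
    y = sign(A @ x): back-project the signs, keep the s largest
    entries, and normalize (the scale of x is unrecoverable)."""
    g = A.T @ y                      # back-projection; E[g] is parallel to x
    x_hat = np.zeros_like(g)
    keep = np.argsort(np.abs(g))[-s:]
    x_hat[keep] = g[keep]
    return x_hat / np.linalg.norm(x_hat)
```

With enough Gaussian measurements the estimate aligns closely with the true direction; the dictionary-sparse case in the paper replaces the canonical basis with an overcomplete analysis dictionary.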

Authors: Fernandez-Granda C; Tang G; Wang X; et al.
Pages: 105–168
Abstract: We consider the problem of super-resolving the line spectrum of a multisinusoidal signal from a finite number of samples, some of which may be completely corrupted. Measurements of this form can be modeled as an additive mixture of a sinusoidal and a sparse component. We propose to demix the two components and super-resolve the spectrum of the multisinusoidal signal by solving a convex program. Our main theoretical result is that—up to logarithmic factors—this approach is guaranteed to be successful with high probability for a number of spectral lines that is linear in the number of measurements, even if a constant fraction of the data are outliers. The result holds under the assumption that the phases of the sinusoidal and sparse components are random and the line spectrum satisfies a minimum-separation condition. We show that the method can be implemented via semi-definite programming, explain how to adapt it in the presence of dense perturbations, and explore its connection to atomic-norm denoising. In addition, we propose a fast greedy demixing method that provides good empirical results when coupled with a local non-convex-optimization step.
PubDate: Mon, 29 May 2017 00:00:00 GMT
DOI: 10.1093/imaiai/iax005
Issue No: Vol. 7, No. 1 (2017)
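The greedy demixing idea can be illustrated with orthogonal matching pursuit over a union dictionary of Fourier atoms and canonical spikes. This is a discretized stand-in for the paper's method (the paper works with a continuous frequency dictionary, a convex program and a local refinement step); here the frequencies are restricted to the DFT grid and the problem sizes are illustrative.

```python
import numpy as np

def greedy_demix(y, n_atoms):
    """Orthogonal matching pursuit over [Fourier atoms | spike atoms]:
    at each step pick the atom most correlated with the residual,
    then refit all selected atoms by least squares."""
    n = len(y)
    t = np.arange(n)
    F = np.exp(2j * np.pi * np.outer(t, t) / n) / np.sqrt(n)
    D = np.hstack([F, np.eye(n)])      # columns 0..n-1: sinusoids; n..2n-1: spikes
    support = []
    r = y.astype(complex)
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    freqs = sorted(j for j in support if j < n)
    spikes = sorted(j - n for j in support if j >= n)
    return freqs, spikes
```

On a clean on-grid example—two sinusoids plus one large corrupted sample—the greedy sweep separates the spectral lines from the outlier in three iterations.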