COMPUTING SCIENCE

The Best Bits

A new technology called compressive sensing slims down data at the source

Brian Hayes

Decompressing

The last big challenge of compressive sensing is decompressing. Given a vector of m sums and an m × N matrix of random bits, how do you recover the original signal vector of N elements? It’s one thing to know that a unique solution exists, another to find it.
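To make the setup concrete, here is a minimal sketch of the measurement model just described, written in Python with NumPy. The sizes N, m and k are illustrative choices of mine, not figures from the article; each of the m sums is the inner product of the signal with a row of random bits.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, k = 128, 48, 4          # illustrative sizes: N-element signal, m sums, k nonzeros

# A k-sparse signal: all but k elements are exactly zero.
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)

# Each measurement sums a random subset of the signal elements,
# i.e., it is the dot product of x with a row of random 0/1 bits.
Phi = rng.integers(0, 2, size=(m, N)).astype(float)
y = Phi @ x                   # the m sums; decompression must recover x from Phi and y
```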

The approach devised by Candès and his colleagues treats the decoding of the compressed signal as an optimization problem. The aim is to find, among the infinite set of solutions to the m equations, the solution that optimizes some measure of sparseness. The obvious thing to optimize is sparseness itself; in other words, look for the solution that minimizes the number of nonzero signal elements. An algorithm for conducting such a search is straightforward; unfortunately, it is also utterly impractical, requiring a blind search among all possible arrangements of the k nonzero elements. A camera based on this technology would produce pictures you could view only by solving a certifiably hard computational problem.
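A direct transcription of that blind search makes the impracticality vivid. The sketch below is my illustration, not code from the article: it tries every possible placement of the k nonzero elements, and for N = 1,000 and k = 10 there are roughly 2.6 × 10²³ placements to try.

```python
import numpy as np
from itertools import combinations

def l0_decode(Phi, y, k):
    """Exhaustive search for a k-sparse x satisfying Phi @ x = y.

    Tries every set of k positions for the nonzero elements;
    the number of sets grows as C(N, k), which is astronomical."""
    N = Phi.shape[1]
    for support in combinations(range(N), k):
        cols = Phi[:, list(support)]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)  # best fit on this support
        if np.allclose(cols @ coef, y):                  # exact solution found?
            x = np.zeros(N)
            x[list(support)] = coef
            return x
    return None
```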

A more familiar mode of optimization is the method of least squares, known for more than 200 years. This prescription calls for minimizing the sum of the squares of the vector elements. There are efficient methods of finding a least-squares solution, so practicality is not an issue. However, there’s another show-stopper: In compressive sensing the least-squares solution is seldom the correct one. Thus a camera using this algorithm gives you a picture you can see quickly, but the image is badly garbled.
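Again a sketch can make the point concrete (repeating the setup from the earlier sketch, with the same made-up sizes). For an underdetermined system, NumPy's pseudoinverse returns the solution with the smallest sum of squares among all vectors satisfying the m equations. It comes back fast, but it is dense rather than sparse:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, k = 128, 48, 4
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
Phi = rng.integers(0, 2, size=(m, N)).astype(float)
y = Phi @ x

x_ls = np.linalg.pinv(Phi) @ y                 # minimum sum-of-squares solution
print(np.allclose(Phi @ x_ls, y))              # True: it satisfies all m equations
print(np.count_nonzero(np.abs(x_ls) > 1e-8))   # close to N, not k: energy smeared everywhere
```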

These two optimization methods seem quite different on the surface, but they have a hidden connection. For each vector element x, the least-squares rule calculates x² and then sums all the results. The search for the sparsest vector can be framed in similar terms, the only change being that x² is replaced by x⁰. The zeroth power of 0 is taken here to be 0, but for any other value of x, x⁰ is equal to 1. Thus the sum of the zeroth powers counts the number of nonzero elements in the vector. This is just the result we want, but there is no efficient algorithm for finding it.
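The counting rule is easy to check numerically; in this small snippet (mine, not the author's), the zeroth power of 0 is explicitly treated as 0, as the argument requires.

```python
import numpy as np

x = np.array([0.0, 3.5, 0.0, -1.2, 0.0])
# With 0**0 taken as 0, each nonzero element contributes 1 to the sum,
# so the sum of zeroth powers equals the number of nonzero elements.
l0 = np.sum(np.where(x == 0, 0.0, np.abs(x) ** 0))
assert l0 == np.count_nonzero(x) == 2
```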

At this point, having found that x² doesn’t work and neither does x⁰, the Goldilocks alternative is irresistible. How about x¹? Of course x¹ is simply x, and so the optimization strategy derived from this rule amounts to a search for the solution vector whose elements have the smallest sum (to be precise, the smallest sum of absolute values, so that large negative elements cannot masquerade as small ones). There are efficient algorithms for performing this minimization. Equally important, the method produces correct results. Given a sparse input (all but k elements exactly zero), the compression-decompression cycle almost always reconstructs the input exactly. For an approximately sparse input (all but k elements near zero), the reconstruction is a good approximation.
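One reason efficient algorithms exist is that minimizing the sum of absolute values subject to the m equations can be posed as a linear program. Here is a minimal sketch using SciPy's general-purpose linprog and the standard variable-splitting trick; it is my illustration, not the particular solver the researchers use.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(Phi, y):
    """Find the solution of Phi @ x = y minimizing sum(|x|).

    Standard trick: introduce auxiliary variables t with
    -t <= x <= t and minimize sum(t)."""
    m, N = Phi.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # objective: sum of the t's
    A_ub = np.block([[ np.eye(N), -np.eye(N)],      #  x - t <= 0
                     [-np.eye(N), -np.eye(N)]])     # -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((m, N))])       # the m equations Phi @ x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]
```

Applied to the Phi and y from the first sketch, l1_decode(Phi, y) should recover the planted k-sparse signal essentially exactly, matching the behavior described above.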

The version of compressive sensing I have presented here is something of a caricature. In particular, the pretense that a signal arrives at the sensor preformatted as a sparse vector of numbers glosses over a great deal of real-world complexity. In practice some transformation (such as conversion from the time domain to the frequency domain) is usually needed. And the random vectors that govern the sampling of the signal may have elements more complicated than 0’s and 1’s. But the basic scheme remains intact. Here is the compressed version of compressive sensing: Find a sparse domain. Sum random subsets of the signal. Decompress by finding the solution that minimizes the sum of the absolute values of the signal elements.
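Putting the recipe together end to end, here is one last sketch. It is again illustrative: the discrete cosine transform as the sparsifying domain and all the sizes are my choices, not the article's. The signal is dense in time but sparse in frequency; the random-subset sums are taken in the time domain, and the decoding works on the frequency coefficients.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(7)
N, m, k = 128, 60, 4

# A signal that is sparse in the frequency (DCT) domain but dense in time.
s = np.zeros(N)
s[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
signal = idct(s, norm="ortho")

# Sum random subsets of the time-domain signal.
Phi = rng.integers(0, 2, size=(m, N)).astype(float)
y = Phi @ signal

# The measurements are linear in the sparse coefficients: y = (Phi @ B) s,
# where B is the inverse-DCT matrix, so decode in the sparse domain.
B = idct(np.eye(N), axis=0, norm="ortho")
A = Phi @ B

# Same variable-splitting linear program as in l1_decode above.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N),
              A_eq=np.hstack([A, np.zeros((m, N))]), b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
s_hat = res.x[:N]
print(np.max(np.abs(s_hat - s)))   # should be tiny: the sparse spectrum is recovered
```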
