
COMPUTING SCIENCE

The Best Bits

A new technology called compressive sensing slims down data at the source

Brian Hayes

Vectors and Coins

[Figure: an unknown sparse vector and randomly selected subsets of its elements]

In my efforts to understand recent work on compressive sensing, I have found it helpful to set aside all the messy details of sound waves and photons, reducing the process to its mathematical essentials. Imagine that the signal arrives at the sensor already in numerical form; it is a vector, a sequence of N numbers. We are assured that this signal vector is sparse: Only k of the elements are nonzero. But we know nothing about where those k elements lie or what their values are. Our task is to determine the positions and values of the nonzero elements with as few measurements as possible.

In this abstract and simplified version of the sensing process, “making a measurement” means selecting some subset of the signal elements and determining their total value. If the subset happens to be a single element, then the measurement yields that element’s value directly. If we choose multiple elements, we can measure all of them at once, but we learn only the sum of their values, with no information about how the sum is apportioned among the individual elements.
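To make this measurement model concrete, here is a minimal sketch in Python (the vector length, the sparsity, and the names signal and measure are illustrative choices of mine, not notation from the article or any standard library):

```python
import random

N, k = 16, 2                  # length of the signal and its sparsity

# Build a sparse signal: all zeros except k elements at unknown positions.
signal = [0] * N
for pos in random.sample(range(N), k):
    signal[pos] = random.randint(1, 9)

def measure(subset):
    """One measurement: the total value of the chosen elements.
    We learn only the sum, not how it is apportioned among them."""
    return sum(signal[i] for i in subset)

print(measure([3]))           # a single element yields its value directly
print(measure(range(N)))      # the whole vector yields only the grand total
```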

The classical sensing protocol measures each element individually. In this way we are sure to learn the whereabouts of all k nonzero elements, but the process takes N operations. If there is to be any hope of solving the problem with fewer measurements, we’ll somehow have to get more information out of each measurement.

By ganging together pairs of adjacent elements we can cover the entire N-element vector with just N/2 measurements. However, all we learn about each pair of values is their sum. We have reduced the resolution without gaining any information. Lumping together more than two elements allows even fewer measurements but makes the ambiguity still worse. The strategy doesn’t look promising.
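A two-line illustration of the ambiguity: two different sparse vectors can produce exactly the same pair-sum measurements (a toy example under the measurement model sketched above):

```python
a = [5, 0, 0, 7]
b = [0, 5, 7, 0]

def pair_sums(v):
    """Measure adjacent pairs: len(v)/2 measurements instead of len(v)."""
    return [v[i] + v[i + 1] for i in range(0, len(v), 2)]

print(pair_sums(a))   # [5, 7]
print(pair_sums(b))   # [5, 7] -- identical measurements, different signals
```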

But wait! We are ignoring a vital fact: The vector is known to be sparse, with most of the entries equal to zero. Perhaps by exploiting this knowledge we can make further progress.

Suppose the vector is the ultimate in sparseness—a long string of zeros with just one nonzero number lost somewhere among them. Then the process of measurement becomes a search for the one distinguished element. For this problem there are well-known strategies much better than checking the elements one by one.

The task brings to mind a genre of mathematical puzzles about counterfeit coins. In one such puzzle you are given a sack of N coins and told that one of them is underweight. Using a scale calibrated in units of a genuine coin’s weight, how many weighings does it take to identify the bogus coin? Putting coins on the scale in batches rather than individually can reduce the worst-case number of weighings from N to the logarithm of N. For a sack of eight coins, here is a recipe for solving the problem in three weighings: First weigh the set {1,2,3,4}, then {1,2,5,6} and finally {1,3,5,7}. The results of the three trials surely identify the light coin. For example, if the first and second sets are both light but the third set is correct, then coin 2 must be the culprit; if no set is light, coin 8 is implicated. (Going from eight weighings to three may not seem all that impressive, but with a million coins the same trick would reduce the number of weighings from 1,000,000 to about 20.)
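In code, the recipe amounts to reading off the light coin's index in binary: weighing j holds exactly those coins whose binary digit j is 0, and a scale that reads light marks that digit as 0. Here is one way to write it down, a sketch assuming a genuine coin weighs exactly 1 unit:

```python
def find_light_coin(weights):
    """Identify the one underweight coin among 2**b coins in b weighings.

    weights[i] is the weight of coin i+1; genuine coins weigh 1 unit.
    Weighing j puts on the scale every coin whose binary digit j
    (counting from the most significant) is 0; a light reading means
    that digit of the fake coin's index is 0.
    """
    n = len(weights)
    b = n.bit_length() - 1              # weighings needed: log2(n)
    index = 0
    for j in reversed(range(b)):        # bit positions, MSB first
        batch = [i for i in range(n) if not (i >> j) & 1]
        if sum(weights[i] for i in batch) == len(batch):
            index |= 1 << j             # batch weighed true: digit is 1
    return index + 1                    # coins are numbered from 1

# Eight coins, coin 2 underweight: three weighings identify it.
coins = [1, 0.9, 1, 1, 1, 1, 1, 1]
print(find_light_coin(coins))           # -> 2
```

For eight coins the three batches generated by the loop are precisely the sets {1,2,3,4}, {1,2,5,6} and {1,3,5,7} given in the recipe above.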

An algorithm based on this kind of reasoning could solve the compressive-sensing problem for the special case of k=1. Unfortunately, this success is of no practical importance because signals with a single significant component—such as the pure sine wave mentioned earlier—are vanishingly rare in the real world. But a variation of the method can be made to work for larger k.
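To see how the coin trick carries over, here is a sketch of k=1 recovery under the subset-sum measurement model: one measurement of the entire vector reveals the lone nonzero value, and log2 N further measurements pin down its position one binary digit at a time (recover_k1 is my label for the procedure, not a name from the literature):

```python
def recover_k1(measure, N):
    """Recover a signal with exactly one nonzero element using
    1 + log2(N) measurements instead of N.

    One measurement of everything gives the value; then, for each bit
    position j, measuring the subset of indices whose digit j is 1
    tells us one binary digit of the nonzero element's position."""
    value = measure(range(N))               # total sum = the lone value
    position = 0
    for j in range(N.bit_length() - 1):     # log2(N) bit positions
        subset = [i for i in range(N) if (i >> j) & 1]
        if measure(subset) != 0:            # the element is in this subset
            position |= 1 << j
    return position, value

# Example: a 16-element signal whose single nonzero entry sits at index 11.
signal = [0] * 16
signal[11] = 4.2
print(recover_k1(lambda s: sum(signal[i] for i in s), 16))  # -> (11, 4.2)
```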







