COMPUTING SCIENCE

Computational Photography

New cameras don't just capture photons; they compute pictures

Brian Hayes

The Light Field

We live immersed in a field of light. At every point in space, rays of light arrive from every possible direction. Many of the new techniques of computational photography work by extracting more information from this luminous field.

Here's a thought experiment: Remove an image sensor from its camera and mount it facing a flat-panel display screen. Suppose both the sensor and the display are square arrays of 1,000 × 1,000 elements; to keep things simple, assume they are monochromatic devices. The pixels on the surface of the panel emit light, with the intensity varying from point to point depending on the pattern displayed. Each pixel's light radiates outward to reach all the photosites of the sensor. Likewise each photosite receives light from all the display pixels. With a million emitters and a million receivers, there are 10¹² interactions. What kind of image does the sensor produce? The answer is: a total blur. The sensor captures a vast amount of information about the energy radiated by the display, but that information is smeared across the entire array and cannot readily be recovered.
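To make the smearing concrete, here is a small numerical sketch of the bare-sensor case. The array size, the plane separation, and the inverse-square weighting are assumptions of the sketch (the arrays are scaled down from 1,000 × 1,000 so it runs quickly); nothing here comes from a real camera model.

```python
import numpy as np

# Scaled-down stand-ins for the 1,000 x 1,000 arrays in the text
# (a full-size run would involve 10^12 pairwise interactions).
N = 40                                   # 40 x 40 display and sensor
rng = np.random.default_rng(0)
display = rng.random((N, N))             # arbitrary monochrome pattern

# Coordinates of display pixels and sensor photosites on two facing planes.
ys, xs = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# Every display pixel illuminates every photosite; weight each of the
# N^4 pairings by an assumed inverse-square falloff over the separation.
sep = float(N)                           # assumed plane-to-plane distance
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1) + sep**2
sensor = (display.ravel()[None, :] / d2).sum(axis=1).reshape(N, N)

# The pattern's contrast is almost entirely washed out; what little
# variation survives is a smooth geometric falloff toward the edges
# of the array, not the displayed picture.
print(display.std() / display.mean())    # contrast of the source pattern
print(sensor.std() / sensor.mean())      # contrast of the "image": far lower
```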

Now interpose a pinhole between the display and the sensor. If the aperture is small enough, each display pixel illuminates exactly one sensor photosite, yielding a sharp image. But clarity comes at a price, namely throwing away all but a millionth of the incident light. Instead of having 10¹² exchanges between pixels and photosites, there are only 10⁶.
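The pinhole case is easy to model in the same terms: with an ideal pinhole midway between the two planes, each display pixel maps onto exactly one photosite, inverted, and only about a millionth of the light gets through. The geometry and the transmitted fraction below are idealizations for the sketch, not measurements.

```python
import numpy as np

# Idealized pinhole midway between display and sensor: each display
# pixel maps to exactly one photosite, with the image inverted.
N = 40
rng = np.random.default_rng(0)
display = rng.random((N, N))

transmitted_fraction = 1e-6      # roughly one part in 10^6 of the light
pinhole_image = transmitted_fraction * display[::-1, ::-1]

# Sharp but dim: the pattern is preserved exactly (up to inversion),
# at the cost of discarding nearly all of the incident light.
assert np.allclose(pinhole_image[::-1, ::-1] / transmitted_fraction, display)
```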

A lens is less wasteful than a pinhole: It bends light, so that an entire cone of rays emanating from a pixel is made to reconverge on a photosite. But if the lens does its job correctly, it still enforces a one-pixel, one-photosite rule. Moreover, objects are in focus only if their distance from the lens is exactly right; rays originating at other distances are focused to a disk rather than a point, causing blur.
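A short calculation shows how quickly that blur disk grows away from the plane of focus. The sketch below uses the standard thin-lens relation; the focal length, aperture, and focus distance are illustrative values chosen for the sketch, not figures from the article.

```python
# Thin-lens sketch: how large a blur disk does an object at distance s
# cast on the sensor when the lens is focused at distance s_focus?
# From the thin-lens equation 1/s + 1/v = 1/f, the blur-disk
# (circle of confusion) diameter works out to
#     A * f * |s - s_focus| / (s * (s_focus - f)).

def blur_disk_mm(s, s_focus, f=0.05, A=0.025):
    """Blur-disk diameter in mm for object distance s (m), focus distance
    s_focus (m), focal length f (m), and aperture diameter A (m)."""
    return 1000 * A * f * abs(s - s_focus) / (s * (s_focus - f))

# An assumed 50 mm f/2 lens (25 mm aperture) focused at 3 m:
for s in (1.0, 2.0, 3.0, 5.0, 20.0):
    print(f"object at {s:5.1f} m -> blur disk {blur_disk_mm(s, 3.0):.3f} mm")
# Only objects very near 3 m are rendered as (nearly) points;
# everything else spreads into a disk and looks soft.
```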

Photography with any conventional camera—digital or analog—is an art of compromise. Open up the lens to a wide aperture and it gathers plenty of light, but this setting also limits depth of field; you can't get both ends of a horse in focus. A slower shutter (longer exposure time) allows you to stop down the lens and thereby increase the depth of field; but then the horse comes out unblurred only if it stands perfectly still. A fast shutter and a narrow aperture alleviate the problems of depth of field and motion blur, but the sensor receives so few photons that the image is mottled by random noise.
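The trade-off can be put in rough numbers. In the sketch below, the light gathered scales with aperture area and exposure time, and photon shot noise limits the signal-to-noise ratio to about the square root of the photon count; the baseline count and the settings are invented for illustration, not taken from the article.

```python
import math

def relative_exposure(f_number, shutter_s):
    """Light gathered relative to f/2 at 1/60 s (an assumed baseline).
    Exposure scales with aperture area (1/f_number^2) and with time."""
    return (2.0 / f_number) ** 2 * (shutter_s / (1 / 60))

def shot_noise_snr(photons):
    """Photon arrivals are Poisson, so SNR grows only as sqrt(count)."""
    return math.sqrt(photons)

baseline = 10_000   # assumed photons per photosite at f/2, 1/60 s
settings = [
    ("wide open, fast:    f/2,  1/60 s", 2.0, 1 / 60),   # shallow depth of field
    ("stopped down, slow: f/16, 1 s   ", 16.0, 1.0),     # motion blur risk
    ("stopped down, fast: f/16, 1/60 s", 16.0, 1 / 60),  # few photons: noisy
]
for label, N, t in settings:
    photons = baseline * relative_exposure(N, t)
    print(f"{label} -> {photons:8.0f} photons, SNR ~ {shot_noise_snr(photons):5.1f}")
```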

Computational photography can ease some of these constraints. In particular, capturing additional information about the light field allows focus and depth of field to be corrected after the fact. Other techniques can remove motion blur.
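One widely used way to refocus after the fact is shift-and-add over a captured light field: treat the light field as a grid of sub-aperture views, shift each view in proportion to its position on the aperture, and average. The sketch below is a minimal version of that idea; the array layout, the integer-pixel shifts, and the slope parameter are simplifications of my own, not a method described in the article.

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocusing sketch.

    light_field: array of shape (U, V, H, W) -- a grid of sub-aperture
    views, one small conventional image per (u, v) position on the lens
    aperture.  slope: pixels of shift per unit of aperture offset; each
    slope value brings a different depth into focus.  Integer shifts via
    np.roll stand in crudely for proper sub-pixel interpolation.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy usage: a 5 x 5 grid of 64 x 64 views, refocused at two depths.
rng = np.random.default_rng(1)
lf = rng.random((5, 5, 64, 64))
near = refocus(lf, slope=2.0)   # sharpens objects with 2 px of parallax per aperture step
far = refocus(lf, slope=0.0)    # slope 0 reproduces the ordinary full-aperture image
```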
