COMPUTING SCIENCE

Computational Photography

New cameras don't just capture photons; they compute pictures

Brian Hayes

Staying Focused

Shifting the point of view is one of the simpler operations made possible by a light-field camera; less obvious is the ability to adjust focus and depth of field.

When the image of an object is out of focus, light that ought to be concentrated on one photosite is spread out over several neighboring sites—covering an area known as the circle of confusion. The extent of the spreading depends on the object's distance from the camera, compared with the ideal distance for the focal setting of the lens. If the actual distance is known, then the size of the circle of confusion can be calculated, and the blurring can be undone algorithmically. In essence, light is subtracted from the pixels it has leaked into and is restored to its correct place. The operation has to be repeated for each pixel.
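To make the arithmetic concrete, here is a minimal Python sketch of the standard thin-lens formula for the blur-circle diameter, assuming the lens focal length, f-number, focus distance and object distance are all known; the function name and the example numbers are illustrative, not drawn from any particular camera.

```python
def circle_of_confusion(focal_mm, f_number, focus_mm, object_mm):
    """Diameter (in mm) of the blur circle for a point at object_mm
    when a lens of focal length focal_mm is focused at focus_mm.
    Standard thin-lens geometry; the names are illustrative."""
    aperture = focal_mm / f_number                  # entrance-pupil diameter
    return (aperture
            * abs(object_mm - focus_mm) / object_mm
            * focal_mm / (focus_mm - focal_mm))

# Example: a 50 mm f/2 lens focused at 2 m, with an object 3 m away.
c = circle_of_confusion(50, 2.0, 2000, 3000)
print(f"blur-circle diameter: {c:.3f} mm")
# Dividing by the photosite pitch gives the spread in pixels, that is,
# how far the leaked light must be gathered back for each pixel.
```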

To put this scheme into action, we need to know the distance from the camera to each point in the scene—the point's depth. For a conventional photograph, depth cues are hard to come by, but the light-field camera encodes a depth map within the image data. The key is parallax: an object's apparent shift in position when the viewer moves. In general, an object will occupy a slightly different set of pixels in each of the subimages of the microlens camera; the magnitude and direction of the displacements depend on the object's depth within the scene. The depth information is similar to that in a stereoscopic photograph, but based on data from many images instead of just two.
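As a rough illustration of how parallax yields depth, the sketch below does naive block matching between just two sub-aperture views; real light-field software draws on all of the subimages and uses far more sophisticated matching, so this is only a toy under those simplifying assumptions.

```python
import numpy as np

def block_disparity(view_a, view_b, patch=9, max_shift=8):
    """Toy block matching between two sub-aperture views.

    For each pixel, find the horizontal shift that best aligns a small
    patch of view_a with view_b.  That shift is the parallax; depth is
    inversely proportional to it (depth ~ baseline * focal / shift)."""
    h, w = view_a.shape
    half = patch // 2
    disparity = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half - max_shift):
            ref = view_a[y - half:y + half + 1, x - half:x + half + 1]
            errors = [np.sum((ref - view_b[y - half:y + half + 1,
                                           x + d - half:x + d + half + 1]) ** 2)
                      for d in range(max_shift + 1)]
            disparity[y, x] = np.argmin(errors)
    return disparity
```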

Recording a four-dimensional light field allows for more than just fixing a misfocused image. With appropriate software for viewing the stored data set, the photographer can move the plane of focus back and forth through the scene, or can create a composite image with high depth of field, where all planes are in focus. With today's cameras, focus is something you have to get right before you click the shutter, but in the future it could join other parameters (such as color and contrast) that can be adjusted after the fact.
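One common way to describe such refocusing is shift-and-add: each directional sample of the light field is translated in proportion to its position in the aperture, and the results are averaged. The sketch below assumes the stored data set is already arranged as a (U, V, H, W) array of sub-aperture images; it illustrates the principle rather than any particular camera's software.

```python
import numpy as np
from scipy.ndimage import shift as translate

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4-D light field.

    light_field has shape (U, V, H, W): one H-by-W sub-aperture image
    for each (u, v) direction sample.  alpha sets the depth of the
    synthetic focal plane; alpha = 0 reproduces the as-shot focus."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = alpha * (u - (U - 1) / 2)   # shift grows with distance
            dv = alpha * (v - (V - 1) / 2)   # from the center of the aperture
            out += translate(light_field[u, v], (du, dv), order=1)
    return out / (U * V)

# Sweeping alpha moves the plane of focus through the scene; keeping, at
# each pixel, the alpha that maximizes local contrast gives an
# all-in-focus composite.
```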

The microlens array is not the only approach to computing focus and depth of field. Anat Levin, Rob Fergus, Frédo Durand and William T. Freeman of MIT have recently described another technique, based on a "coded aperture." Again the idea is to modify a normal camera, but instead of inserting microlenses near the sensor, a patterned mask or filter is placed in the aperture of the main lens. The pattern consists of opaque and transparent areas. The simplest mask is a half-disk that blocks half the aperture. You might think such a screen would merely cast a shadow over half the image, but in fact rays from the entire scene reach the entire sensor area by passing through the open half of the lens. The half-occluded aperture does alter the blurring of out-of-focus objects, however, making it asymmetrical. Detecting this asymmetry provides a tool for correcting the focus. The ideal mask is not a simple half-disk but a pattern with openings of various sizes, shapes and orientations. It is called a coded aperture because it imposes a distinctive code or signature on the image data.
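A toy forward model makes the "signature" concrete: an out-of-focus point source is imaged, to a first approximation, as a scaled copy of the aperture mask, so the scale of the imprinted pattern reveals the amount of defocus and hence the depth. The binary mask below is invented for illustration and is not the pattern designed by Levin and her colleagues.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

# An invented 4-by-4 binary aperture mask (1 = open, 0 = opaque).
mask = np.array([[1, 0, 1, 1],
                 [0, 1, 1, 0],
                 [1, 1, 0, 1],
                 [1, 0, 1, 1]], dtype=float)

def defocus_kernel(mask, blur_radius_px):
    """Scale the mask to the size of the blur circle and normalize it
    so the kernel conserves light."""
    size = max(1, int(round(2 * blur_radius_px)))
    kernel = zoom(mask, size / mask.shape[0], order=0)
    return kernel / kernel.sum()

scene = np.zeros((64, 64))
scene[32, 32] = 1.0                          # a single point of light
blurred = fftconvolve(scene, defocus_kernel(mask, 6), mode="same")
# 'blurred' shows the mask pattern stamped at the point's location;
# searching over blur radii for the best-matching scale recovers depth.
```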

Further experiments with coded masks have been reported by Ashok Veeraraghavan, Ramesh Raskar and Amit Agrawal of Mitsubishi Electric Research Laboratories and Ankit Mohan and Jack Tumblin of Northwestern University. They show that such masks are useful not only in the aperture of the lens but also near the sensor plane, where they perform the same function as the array of microlenses in Ng's light-field camera. The "dappling" of the image by the nearby mask encodes the directional information needed to recover the light field.




