
COMPUTING SCIENCE

Computational Photography

New cameras don't just capture photons; they compute pictures

Brian Hayes

The Computational Eye

Some of the innovations described here may never get out of the laboratory, and others are likely to be taken up only by Hollywood cinematographers. But a number of these ideas seem eminently practical. For example, the flutter shutter could be incorporated into a camera without extravagant expense. In the case of the microlens array for recording light fields, Ng is actively working to commercialize the technology. (See refocusimaging.com.)
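The appeal of the flutter shutter is easy to demonstrate in simulation. The sketch below, in Python, blurs one scan line of a synthetic scene with an ordinary open shutter and with a fluttered one, then tries to undo each blur by division in the frequency domain. The 32-tick pseudorandom code and the noise level are invented for the demo; they are not the parameters published by Raskar and his colleagues. The point the sketch makes is theirs, though: the box shutter's spectrum has zeros, so deconvolution blows up, while the coded shutter's spectrum does not, and the scene comes back.

    # Minimal 1-D sketch of coded-exposure ("flutter shutter") deblurring.
    # The 32-tick code and noise level are invented for this demo; they
    # are not the pattern published by Raskar, Agrawal and Tumblin (2006).
    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 256, 32                        # scan-line length, exposure ticks

    scene = rng.random(N)                 # stand-in for one scan line

    box = np.zeros(N); box[:T] = 1.0      # ordinary shutter: open throughout
    flutter = np.zeros(N)
    flutter[:T] = rng.integers(0, 2, T)   # fluttered shutter: random open/closed

    def blur(x, shutter):
        # circular convolution models the smear of a uniformly moving scene
        return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(shutter)))

    def deblur(y, shutter):
        # deconvolution is division in the frequency domain; it is stable
        # only if the shutter's spectrum stays well away from zero
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(shutter)))

    for name, s in [("box shutter    ", box), ("flutter shutter", flutter)]:
        y = blur(scene, s) + rng.normal(0, 1e-6, N)   # blurred, slightly noisy
        worst = np.abs(deblur(y, s) - scene).max()
        print(f"{name}  min |spectrum| = {np.abs(np.fft.fft(s)).min():.3f},"
              f"  max error = {worst:.3g}")

Raskar and colleagues go further than a random code, searching for flutter patterns whose spectra stay as far from zero as possible; that choice is what keeps the deconvolution stable in the presence of sensor noise.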

If some of these techniques do catch on, I wonder how they will change the way we think about photography. "The camera never lies" was always a lie; and yet, despite a long history of airbrush fakery followed by Photoshop fraud, photography retains a special status as a documentary art, different from painting and other more obviously subjective and interpretive forms of visual expression. At the very least, people tend to assume that every photograph is a photograph of something—that it refers to some real-world scene.

Digital imagery has already altered the perception of photography. In the age of silver emulsions, one could think of a photograph as a continuum of tones or hues, but a digital image is a finite array of pixels, each displaying a color drawn from a discrete spectrum. It follows that a digital camera can produce only a finite number of distinguishable images. That number is enormous (perhaps 10^100,000,000), so you needn't worry that your camera will run out of pictures or start to repeat itself. Still, the mere thought that images are a finite resource can bring about a change in attitude.
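That figure is easy to reproduce with back-of-the-envelope arithmetic. The sketch below uses my own assumed numbers, a 14-megapixel sensor with 24-bit color, not figures stated in the article, but it lands in the same neighborhood:

    # Counting the distinguishable images a digital camera can produce.
    # The resolution and color depth below are assumptions made for the
    # estimate, not figures from the article.
    from math import log10

    pixels = 14_000_000      # assumed sensor resolution (14 megapixels)
    colors = 2 ** 24         # assumed distinguishable values per pixel (24-bit)

    # Each pixel varies independently, so the camera can emit
    # colors**pixels distinct images; log10 turns that unwieldy count
    # into a power of ten.
    exponent = pixels * log10(colors)
    print(f"about 10^{exponent:,.0f} possible images")   # ~10^101,146,079

Any plausible choice of sensor size and bit depth puts the exponent in the tens of millions, so the article's round figure is the right order of magnitude.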

Acknowledging that a photograph is a computed object—a product of algorithms—may work a further change. It takes us another step away from the naive notion of a photograph as frozen photons, caught in mid-flight. Neuroscientists have recognized that the faculty of vision resides more in the brain than in the eye; what we "see" is not a pattern on the retina but a world constructed through elaborate neural processing of such patterns. It seems the camera is evolving in the same direction: the key elements are not photons and electrons, or even pixels, but higher-level structures that convey the meaning of an image.

© Brian Hayes

Bibliography

  • Adelson, Edward H., and John Y. A. Wang. 1992. Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Machine Intelligence 14(2):99-106.
  • Bimber, Oliver. 2006. Computational photography—the next big step. (Introduction to special issue on computational photography.) Computer 39(8):28-29.
  • Gortler, Steven J., Radek Grzeszczuk, Richard Szeliski and Michael F. Cohen. 1996. The lumigraph. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1996, pp. 43-54.
  • Levin, Anat, Rob Fergus, Frédo Durand and William T. Freeman. 2007. Image and depth from a conventional camera with a coded aperture. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2007, article no. 70.
  • Levoy, Marc, and Pat Hanrahan. 1996. Light field rendering. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1996, pp. 31-42.
  • Ng, Ren. 2005. Fourier slice photography. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2005, pp. 735-744.
  • Ng, Ren, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan. 2005. Light field photography with a hand-held plenoptic camera. Stanford University Computer Science Tech Report CSTR 2005-02. http://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf
  • Raskar, Ramesh, Kar-Han Tan, Rogerio Feris, Jingyi Yu and Matthew Turk. 2004. Non-photorealistic camera: Depth edge detection and stylized rendering using multi-flash imaging. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2004, pp. 679-688.
  • Raskar, Ramesh, Amit Agrawal and Jack Tumblin. 2006. Coded exposure photography: Motion deblurring using fluttered shutter. ACM Transactions on Graphics 25:795-804.
  • Sen, Pradeep, Billy Chen, Gaurav Garg, Stephen R. Marschner, Mark Horowitz, Marc Levoy and Hendrik P. A. Lensch. 2005. Dual photography. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2005, pp. 745-755.
  • Veeraraghavan, Ashok, Ramesh Raskar, Amit Agrawal, Ankit Mohan and Jack Tumblin. 2007. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2007, article no. 69.
  • Wilburn, Bennett, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz and Marc Levoy. 2005. High performance imaging using large camera arrays. In Proceedings of the International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2005, pp. 765-776.



