New cameras don't just capture photons; they compute pictures
Digital cameras already do more computing than you might think. The image sensor inside the camera is a rectangular array of tiny light-sensitive semiconductor devices called photosites. The image created by the camera is also a rectangular array, made up of colored pixels. Hence you might suppose there's a simple one-to-one mapping between the photosites and the pixels: Each photosite would measure the intensity and the color of the light falling on its surface and assign those values to the corresponding pixel in the image. But that's not the way it's done.
In most cameras, the sensor array is overlain by a patchwork pattern of red, green and blue filters, so that a photosite receives light in only one band of wavelengths. In the final image, however, every pixel includes all three color components. The pixels get their colors through a computational process called de-mosaicing, in which signals from nearby photosites are interpolated in various ways. A single pixel might combine information from a dozen photosites.
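To make the interpolation concrete, here is a minimal sketch in Python of bilinear de-mosaicing for one common filter layout, the RGGB Bayer pattern. The layout, the function names and the 3x3 averaging are illustrative assumptions, not the scheme any particular camera actually uses:

```python
import numpy as np

def box_sum3(a):
    """Sum over each pixel's 3x3 neighborhood (zeros beyond the borders)."""
    p = np.pad(a, 1, mode="constant")
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:] +
            p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])

def demosaic_bilinear(raw):
    """Bilinear de-mosaicing of a raw RGGB Bayer image (even height and width).

    Each photosite recorded only one color; the two missing colors at each
    pixel are estimated by averaging the nearest photosites of that color.
    """
    h, w = raw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    masks = [
        (ys % 2 == 0) & (xs % 2 == 0),   # red photosites
        (ys % 2) != (xs % 2),            # green photosites (two per 2x2 tile)
        (ys % 2 == 1) & (xs % 2 == 1),   # blue photosites
    ]
    out = np.empty((h, w, 3))
    for c, mask in enumerate(masks):
        known = np.where(mask, raw, 0.0)
        # Normalized averaging: sum of known same-color neighbors,
        # divided by how many of them fall in the 3x3 window.
        avg = box_sum3(known) / box_sum3(mask.astype(float))
        out[..., c] = np.where(mask, raw, avg)
    return out
```

For a uniformly colored scene this reconstruction is exact; real de-mosaicing algorithms use far more elaborate interpolation to avoid color fringes at edges, which is where a single pixel may draw on a dozen photosites.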
The image-processing computer in the camera is also likely to apply a "sharpening" algorithm, accentuating edges and abrupt transitions. It may adjust contrast and color balance as well, and then compress the data for more-efficient storage. Given all this internal computation, it seems a digital camera is not simply a passive recording device. It doesn't take pictures; it makes them. When the sensor intercepts a pattern of illumination, that's only the start of the process that creates an image.
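One classic way to accentuate edges is unsharp masking: subtract a blurred copy of the image from the original, then add the difference back, exaggerating every abrupt transition. A minimal sketch follows; the 3x3 box blur and the `amount` parameter are illustrative choices, not what any particular camera's firmware does:

```python
import numpy as np

def box_blur3(a):
    """3x3 box blur; edge rows and columns are replicated before averaging."""
    p = np.pad(a, 1, mode="edge")
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:] +
            p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:]) / 9.0

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the detail that blurring removed."""
    return img + amount * (img - box_blur3(img))
```

Applied to a step edge, the filter overshoots on the bright side and undershoots on the dark side; that overshoot is what makes edges look crisper, and smooth regions pass through unchanged.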
Up to now, this algorithmic wizardry has been directed toward making digital pictures look as much as possible like their wet-chemistry forebears. But a camera equipped with a computer can run more ambitious or fanciful programs. Images from such a computational camera might capture aspects of reality that other cameras miss.