In this talk I will give the traditional camera a fresh, "computational" look. I will show that we can significantly boost a camera's optical performance by slightly changing the way it captures photos: instead of taking a single snapshot at the press of a button, the camera records a whole sequence of wide-aperture photos, corresponding to a special type of "focal stack." This sequence is then merged algorithmically into the final photo the photographer sees.
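The abstract leaves the merging step unspecified; as a rough illustration of how a focal stack can be collapsed into a single photo, the sketch below uses a common baseline, per-pixel sharpness selection, where each output pixel comes from the frame that is best focused there. This is a hedged sketch (assuming NumPy/SciPy and grayscale frames), not the algorithm from the talk.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def merge_focal_stack(stack):
    """Collapse a focal stack into an all-in-focus composite.

    stack: array of shape (K, H, W) -- K frames, each focused at a
    different depth. Baseline strategy: at every pixel, keep the frame
    with the highest local contrast (smoothed Laplacian energy), i.e.,
    the frame that is sharpest there.
    """
    stack = np.asarray(stack, dtype=np.float64)
    # Focus measure per frame: squared Laplacian response, locally averaged.
    sharpness = np.stack(
        [uniform_filter(laplace(f) ** 2, size=9) for f in stack]
    )
    best = np.argmax(sharpness, axis=0)               # (H, W) winner index
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real merging pipeline would also align the frames (refocusing shifts magnification slightly) and blend across seams, but the per-pixel selection step above captures the core idea.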
By generalizing the traditional photographic concepts of "depth of field" and "exposure time" to focal stacks, I will show that this style of photography offers two performance advantages: (1) we can capture a given depth of field much faster than one-shot photography allows, and (2) we can significantly increase the quality (i.e., the signal-to-noise ratio) of photos captured within a restricted exposure time. I will analyze both advantages in detail and discuss their implications for photography.
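To make advantage (1) concrete, here is back-of-the-envelope arithmetic of my own (not taken from the talk), under two simplifying assumptions: depth of field grows roughly linearly with f-number N, and light gathered per unit time scales as 1/N². Tiling an f/16 depth of field with f/4 shots then takes 4 shots, each needing only 1/16 of the single shot's exposure for the same per-pixel SNR, so the whole stack finishes in a quarter of the time.

```python
def stack_time_ratio(n_wide: float, n_narrow: float) -> float:
    """Time to cover a depth of field with a focal stack, relative to
    one narrow-aperture shot of equal SNR.

    Assumptions (illustrative only): DOF is proportional to f-number,
    light per unit time is proportional to 1/N**2, and per-shot
    overhead (readout, refocusing) is ignored.
    """
    shots = n_narrow / n_wide             # wide shots needed to tile the DOF
    per_shot = (n_wide / n_narrow) ** 2   # relative exposure per shot, equal SNR
    return shots * per_shot

# One f/16 shot vs. a stack of f/4 shots:
print(stack_time_ratio(n_wide=4, n_narrow=16))   # 0.25 -> 4x faster
```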