Advances in digital photography have allowed millions of people to capture images their grandparents could only dream of. Some MIT researchers think they’ve found a way to make those shots look even better.
By combining next-generation camera hardware with new image-processing algorithms, a team from the school’s Media Lab says it’s possible to eliminate many of the washed-out spots that plague overexposed images.
So, instead of a powerfully bright sky overpowering that shot of their family frolicking in the forest, photographers using this new approach could get a nuanced image that shows the texture of the clouds along with the darker understory.
Here’s an example provided by MIT, showing the ruins of an old church.
In the original shot, the sky is overexposed and comes out as a white blur behind the darker structure. In the middle shot, you see the more nuanced light patterns that would be recorded with the new approach. In the final image, those psychedelic-looking multicolored patterns are converted by special software back into the scene you’d see in nature.
That kind of ultra-realistic composition is already possible through a process called high dynamic range.
In an HDR photograph — iPhones come pre-loaded with the option — multiple images taken at different exposures are combined into one. The result is a vivid-looking shot that gives each individual element its best possible lighting.
The MIT team, however, says it could potentially get better results from a single photograph.
In a paper presented at this spring’s International Conference on Computational Photography, the Media Lab researchers showed how to trick a light sensor into absorbing more than its maximum level of light.
To do so, their paper says, whenever a pixel reaches capacity the camera would note that it has filled up and reset it to start counting again, making the sensor theoretically capable of capturing unlimited amounts of light.
“This creates a sensor that never saturates,” their paper reads. “Whenever the pixel value gets to its maximum capacity during photon collection, the saturated pixel counter is reset to zero at once, and following photons will cause another round of pixel value increase.”
A camera that does this is known as a “modulo” camera, a reference to modular arithmetic — the kind of counting represented by a 12-hour clock, for example, which resets twice a day instead of counting straight to 24 hours.
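The idea above can be sketched in a few lines of code. This is purely an illustrative toy, not the MIT team's actual algorithm: it assumes a hypothetical 8-bit pixel capacity of 256 and recovers the true counts along one row of pixels under the simplifying assumption that neighboring pixels differ by less than half that capacity.

```python
MAX = 256  # hypothetical sensor capacity (8-bit pixel)

def modulo_capture(true_counts):
    """Each pixel wraps back to zero whenever it hits capacity."""
    return [c % MAX for c in true_counts]

def unwrap_scanline(wrapped):
    """Recover true counts along one row, assuming neighboring
    pixels differ by less than half the sensor capacity."""
    recovered = [wrapped[0]]
    for w in wrapped[1:]:
        prev = recovered[-1]
        # pick the multiple of MAX that keeps the pixel-to-pixel step small
        k = round((prev - w) / MAX)
        recovered.append(w + k * MAX)
    return recovered

scene = [100, 180, 260, 350, 300, 200, 120]  # true photon counts
wrapped = modulo_capture(scene)              # [100, 180, 4, 94, 44, 200, 120]
print(unwrap_scanline(wrapped))              # recovers the original scene
```

The wrapped values alone are ambiguous (a reading of 4 could mean 4, 260, 516, ...), which is why recovery software like the researchers' is needed to turn the raw sensor data back into a realistic image.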
Why is that better for photography? In existing high dynamic range photos, combining several images into one can result in poor quality because of artifacts and “ghost” images that don’t merge properly, often because of a shaky hand, said lead author Hang Zhao.
The results of their experiments are pretty cool-looking, but Zhao cautions that this kind of photography is far from showing up on your next phone or DSLR camera.
That’s partly because the hardware required to capture these images properly isn’t built into most cameras, which meant the MIT researchers had to build their own prototype. In typical cameras, a special converter turns the light that floods into a camera into digital data that is then recorded as an image.
“Our analog-to-digital part happens at the very time the sensor is capturing the photons,” Zhao said. “That means that our sensor is totally digital, no longer half analog and half digital.”
The research has applications beyond taking better snapshots, of course. Since accurate imagery is critical for guiding robots around their environments, this kind of advanced HDR imaging could help a self-driving car stay on course despite quick light changes, such as entering a tunnel, as MIT notes.
An overexposed shot of an RV, with whited-out areas where the light was too intense. The more nuanced light patterns would be recorded with the new approach.
Patterns detected by the MIT researchers’ approach.
The “recovered” image, with the data turned back into realistic representations of the different light levels.
All images courtesy MIT