Say cheese! Researchers have developed a tiny camera that takes amazingly sharp pictures. Don’t sneeze while it’s in your hand.
Smaller cameras could mean lighter smartphones and new James Bond-style gadgets. But that’s not all. Cameras on this scale could swim through the body, hitch a ride on a bug, probe your brain, or monitor hostile environments. And these are just a few of the possibilities.
How do you pack so much shooting power into something the size of a crumb? It takes a “radically different approach” to making a camera lens, says Felix Heide. He is a computer scientist at Princeton University in New Jersey. His lab developed the camera with colleagues at the University of Washington in Seattle. The team shared their work in Nature Communications in November.
Cameras have two main parts: a lens and a sensor. The lens bends incoming light onto the sensor, where the image is recorded. Over the past few decades, sensors have become smaller and smaller. But lenses are another story. “The lenses haven’t been miniaturized at all,” says Heide. Most are designed little differently today than they were in the 1800s.
Lenses are traditionally made by stacking curved pieces of glass or plastic. A curved surface bends the light passing through it. How much the light bends depends on the curvature. So a single curved piece of glass can serve as a simple lens. To bend the light in multiple ways, several pieces can be stacked.
Heide’s team took a completely different approach. They made a lens from a metasurface. These surfaces are super-thin man-made materials, patterned with tiny structures. The structures are so small that they are measured in billionths of a meter (nanometers). Similar but slightly thicker materials are called metamaterials.
“Metamaterials interact with light in entirely new ways not found in nature,” says Natalia Litchinitser. She is an electrical engineer at Duke University in Durham, North Carolina. How they interact depends on the structures – their shape, density, pattern and composition. This is also the case for metasurfaces.
With the right design, metasurfaces can become miniature lenses or mirrors. This means they can squeeze into small spaces and reveal things people haven’t seen before. Another plus? They can be made for pennies. This is because you can make them using the process developed to produce computer chips.
Still, warns Litchinitser, this technology is relatively new and has its limitations. For example, metasurface lenses often produce blurry images or images with colored halos around the edges. Litchinitser credits the creators of the new camera for developing computer programs to overcome these problems. For these programs, the researchers turned to artificial intelligence, or AI.
Learning to take better pictures
When TikTok or Snapchat recognizes your face in a photo and applies a filter, that’s AI at work. The more people use these features, the better their machine-learning programs get at recognizing faces. That’s because these programs learn from their mistakes.
With a similar approach, Heide’s team addressed two key challenges for metasurface cameras: lens design and image quality. To get a high-quality image, they needed a metasurface with over 1.5 million tiny structures. But how should those structures be arranged to get the best picture? It would take far too much time and computing power to explore all the possibilities.
Fortunately, there is a shortcut. The team wrote a computer program that simulates light traveling through a lens and the image it creates. Then the program tweaked the lens design and ran the simulation again. It compared the new image to the previous one and judged which was better. As the program worked through different possibilities, it learned a little more each time about how best to tweak the design to get the best image.
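The simulate, tweak and compare loop described above can be sketched in miniature. This is a toy illustration only, not the team’s actual software: the one-number “design” and the sharpness formula are invented stand-ins for a real light simulation.

```python
import random

def simulate_image_sharpness(focal_error):
    # Stand-in for the light simulation: a design whose focal point
    # lands exactly on the sensor (focal_error == 0) gives the
    # sharpest image. A score of 1.0 is perfect; lower is blurrier.
    return 1.0 / (1.0 + focal_error ** 2)

def run_design_loop(steps=500):
    design = 5.0  # start with a badly focused design
    best_score = simulate_image_sharpness(design)
    for _ in range(steps):
        candidate = design + random.gauss(0, 0.5)        # tweak the design
        score = simulate_image_sharpness(candidate)      # re-run the simulation
        if score > best_score:                           # keep only improvements
            design, best_score = candidate, score
    return design, best_score

design, score = run_design_loop()
```

Each pass keeps a tweak only if the simulated picture improves, so the design gradually drifts toward a sharp focus without ever testing every possibility.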
But even a perfect lens design won’t deliver sharp, clear images unless you take on another challenge. No metasurface lens can perfectly focus all the light rays that pass through it. This introduces blurring. To deal with this, the team wrote a second computer program. It looked at images of a simulated scene. The images were blurry in different ways. By going through the images and comparing them to the original scene, the program learned to correct each type of blur. The result: an image-processing program that made pictures crisp and sharp.
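The same learn-by-comparing idea can be sketched with a toy one-dimensional “image”. Everything here is an invented stand-in, not the researchers’ method: the blur is a simple moving average, and the “learning” is just trying correction strengths and keeping whichever one best restores a known test scene.

```python
import random

def blur(signal, strength):
    # Moving-average blur: each pixel mixes with its neighbors.
    out = []
    for i in range(len(signal)):
        left = signal[i - 1] if i > 0 else signal[i]
        right = signal[i + 1] if i < len(signal) - 1 else signal[i]
        out.append((1 - strength) * signal[i] + strength * 0.5 * (left + right))
    return out

def sharpen(signal, amount):
    # Unsharp masking: push each pixel away from its local average.
    local_avg = blur(signal, 0.5)
    return [s + amount * (s - b) for s, b in zip(signal, local_avg)]

def error(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

scene = [0.0] * 10 + [1.0] * 10 + [0.0] * 10  # a bright bar on black
blurry = blur(scene, 0.6)

# "Learn" the best correction strength by comparing results with the
# known original scene and keeping improvements.
best_amount, best_err = 0.0, error(blurry, scene)
for _ in range(200):
    candidate = best_amount + random.uniform(-0.2, 0.2)
    e = error(sharpen(blurry, candidate), scene)
    if e < best_err:
        best_amount, best_err = candidate, e
```

Because the program knows what the original scene looked like, it can score each correction and steadily settle on one that undoes the blur.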
A lens of just 0.5 cubic millimeters (about one 33,000th of a cubic inch) now rivals the quality of a traditional camera lens 550,000 times that volume. Like those from far bulkier cameras, images from the new one are crisp, colorful and capture a wide field of view. You could even take a selfie with it. For now, though, the team is being extra careful and keeping the tiny camera safely out of harm’s way.
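For readers who want to check the size comparison, here is the arithmetic behind it. The 0.5 cubic millimeter and 550,000-times figures come from the text; the unit conversions are standard.

```python
# Checking the size comparison stated in the article.
small_lens_mm3 = 0.5          # volume of the new metasurface lens
ratio = 550_000               # how many times bigger the rival lens is

big_lens_mm3 = small_lens_mm3 * ratio    # 275,000 cubic millimeters
big_lens_cm3 = big_lens_mm3 / 1000       # 275 cubic centimeters

MM3_PER_CUBIC_INCH = 25.4 ** 3           # 16,387.064 cubic mm per cubic inch
small_lens_in3 = small_lens_mm3 / MM3_PER_CUBIC_INCH  # about 1/33,000 in^3
```

So the traditional lens it rivals would fill roughly the volume of a soda can, while the new lens is a barely visible speck.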
Ethan Tseng is a computer science graduate student in Heide’s lab. He co-led the project with a student from the University of Washington. “We live in very exciting times,” says Tseng. “We’re seeing all kinds of cool technologies that can completely change the way we originally thought we would build things.”
This is part of a series featuring news on technology and innovation, made possible through the generous support of the Lemelson Foundation.