
How your phone learned to see in the dark

By Samantha Kelly

Open up Instagram at any given moment and it probably won’t take long to find crisp pictures of the night sky, a skyline after dark or a dimly lit restaurant. While shots like these used to require advanced cameras, they’re now often possible from the phone you already carry around in your pocket.

Tech companies such as Apple, Samsung and Google are investing resources to improve their night photography options at a time when camera features have increasingly become a key selling point for smartphones that otherwise look and feel largely the same from one year to the next.

Earlier this month, Google brought a faster version of its Night Sight mode, which uses AI algorithms to brighten images taken in dark environments, to more of its Pixel models. Apple’s Night mode, which is available on models as far back as the iPhone 11, was touted as a premier feature on its iPhone 14 lineup last year thanks to the lineup’s improved camera system.

These tools have come a long way in just the past few years, thanks to significant advances in artificial intelligence as well as image processing that has become sharper, quicker and more capable in challenging shooting conditions. And smartphone makers aren’t done yet.

“People increasingly rely on their smartphones to take photos, record videos, and create content,” said Lian Jye Su, an artificial intelligence analyst at ABI Research. “[This] will only fuel the smartphone companies to up their games in AI-enhanced image and video processing.”

While there has been much focus lately on Silicon Valley’s renewed AI arms race over chatbots, the push to develop more sophisticated AI tools could also help further improve night photography and bring our smartphones closer to being able to see in the dark.

How it works

Samsung’s Night mode feature, which is available on various Galaxy models but optimized for its premium S23 Ultra smartphone, promises to do what would have seemed unthinkable just five to 10 years ago: enable phones to take clearer pictures with little light.

The feature is designed to minimize what’s called “noise,” the grainy distortion that can degrade an image, typically caused by poor lighting conditions, long exposure times and other challenging shooting factors.

The secret to reducing noise, according to the company, is a combination of hardware and software, starting with the S23 Ultra’s adaptive 200-megapixel sensor. After the shutter button is pressed, Samsung uses advanced multi-frame processing to combine multiple images into a single picture, and AI to automatically adjust the photo as necessary.

“When a user takes a photo in low or dark lighting conditions, the processor helps remove noise through multi-frame processing,” said Joshua Cho, executive vice president of Samsung’s Visual Solution Team. “Instantaneously, the Galaxy S23 Ultra detects the detail that should be kept, and the noise that should be removed.”
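To illustrate the principle (though not Samsung’s proprietary pipeline), here is a minimal Python sketch of multi-frame noise reduction. The idea: random sensor noise differs from frame to frame, so averaging several aligned captures of the same scene suppresses the noise while preserving detail. The frame count, noise levels and function names below are illustrative assumptions, not anything Samsung has disclosed.

import numpy as np

def merge_burst(frames):
    """Average a burst of aligned low-light frames.

    Averaging n frames cuts random noise's standard deviation by
    roughly sqrt(n), which is why burst capture helps in the dark.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Simulated usage: eight noisy captures of the same dim scene.
rng = np.random.default_rng(0)
scene = rng.integers(0, 40, size=(480, 640, 3)).astype(np.float32)
burst = [np.clip(scene + rng.normal(0, 15, scene.shape), 0, 255) for _ in range(8)]
merged = merge_burst(burst)  # visibly cleaner than any single frame

In a real phone, the frames must first be aligned to compensate for hand shake and subject motion, which is a large part of what makes these pipelines difficult to build.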

For Samsung and other tech companies, AI algorithms are crucial to delivering photos taken in the dark. “The AI training process is based on a large number of images tuned and annotated by experts, and AI learns the parameters to adjust for every photo taken in low-light situations,” Su explained.

For example, algorithms identify the right level of exposure, determine the correct color palette and gradient for the given lighting conditions, artificially sharpen blurred faces or objects, and then make those changes. The final result, however, can look quite different from what the person taking the picture saw in real time, in what some might argue is a technical sleight of hand.
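As a rough illustration of one such automatic adjustment (not any vendor’s actual algorithm), the Python sketch below brightens a dark photo with gamma correction; the fixed brightness target and the function name are assumptions standing in for values a trained model might predict per photo.

import numpy as np

def auto_brighten(img, target_mean=0.45):
    """Choose a gamma so the photo's mean brightness lands near a target.

    Gamma is one simple knob a learned model could set; real systems
    adjust many more parameters (color, sharpness, tone curves).
    """
    norm = img.astype(np.float32) / 255.0
    mean = float(np.clip(norm.mean(), 1e-6, 0.99))
    gamma = np.log(target_mean) / np.log(mean)  # approximates mean**gamma ~ target
    return (np.clip(norm ** gamma, 0.0, 1.0) * 255).astype(np.uint8)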

Google is also focused on reducing noise in photography. Its AI-powered Night Sight feature captures a burst of longer-exposure frames. It then uses something called HDR+ Bracketing, which creates several photos with different settings. After a picture is taken, the images are combined to create “sharper photos” even in dark environments “that are still incredibly bright and detailed,” said Alex Schiffhauer, a group product manager at Google.
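The sketch below shows the general idea behind exposure bracketing, assuming a simple weighted average: each pixel leans on the frame where it is best exposed, and dividing by exposure time estimates the scene’s true brightness. It is a toy model, not Google’s HDR+ pipeline, and the mid-tone weighting scheme is an assumption chosen for illustration.

import numpy as np

def merge_brackets(frames, exposures):
    """Merge frames of one scene shot at different exposure times.

    Pixels are weighted toward mid-gray (well exposed, neither crushed
    nor clipped), divided by exposure time to estimate scene radiance,
    then tone-mapped back to an 8-bit image.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposures):
        norm = img.astype(np.float64) / 255.0
        weight = 1.0 - 2.0 * np.abs(norm - 0.5)  # trust mid-tones most
        acc += weight * norm / t                 # back out scene radiance
        wsum += weight
    radiance = acc / np.maximum(wsum, 1e-6)
    radiance /= max(radiance.max(), 1e-6)        # normalize before tone mapping
    return (np.clip(radiance, 0, 1) ** (1 / 2.2) * 255).astype(np.uint8)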

While effective, the feature can introduce a slight but noticeable delay before the image is ready. But Schiffhauer said Google intends to speed up the process further on future Pixel models. “We’d love a world in which customers can get the quality of Night Sight without needing to hold still for a few seconds,” Schiffhauer said.

Google also has an astrophotography feature that allows people to take shots of the night sky without needing to tweak the exposure or other settings. The algorithms detect details in the sky and enhance them so they stand out, according to the company.

Apple has long been rumored to be working on an astrophotography feature, but some iPhone 14 Pro Max users have already been able to capture pictures of the sky through its existing Night mode tool. When a device detects a low-light environment, Night mode turns on to capture details and brighten shots. (The company did not respond to a request to elaborate on how the algorithms work.)

AI can make a difference in the image, but the end results for each of these features also depend on the phone’s lenses, said Gartner analyst Bill Ray. A traditional camera will have the lens several centimeters from the sensor, but the limited space inside a phone often requires squeezing things together, which can result in a shallower depth of field and reduced image quality, especially in darker environments.

“The quality of the lens is still a big deal, and how the phone addresses the lack of depth,” Ray said.

The next big thing

While night photography on phones has come a long way, a buzzy new technology could push it even further.

Generative AI, the technology that powers the viral chatbot ChatGPT, has earned plenty of attention for its ability to create compelling essays and images in response to user prompts. But these AI systems, which are trained on vast troves of online data, also have potential to edit and process images.

“In recent years, generative AI models have also been used in photo-editing functions like background removal or replacement,” Su said. If this technology is added to smartphone photo systems, it could eventually make night modes even more powerful, Su said.

Big Tech companies, including Google, are already fully embracing this technology in other parts of their business. Meanwhile, smartphone chipset vendors like Qualcomm and MediaTek are looking to support more generative AI applications, including image and video augmentation, natively on consumer devices, Su said.

“But this is still about two to three years away from limited versions of this showing up on smartphones,” he said.

The-CNN-Wire
™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
