
Training computers to see the invisible

by Sarah Wells

For many years, Felix Heide has been fascinated by vision – both technological and human. As an assistant professor of computer science at Princeton, Heide has built on this fascination to help computers see: at first, just as well as humans, but now, even better. 

While cameras originally mimicked human biology by taking in information through lenses similar to those of the eye, Heide believes that computers could help us simultaneously see and analyze the world around us in ways far beyond conventional optics.

“Cameras have become a ubiquitous interface between the real world and computers,” Heide says. “We use them either directly, be that for communication in our personal devices, discovery across the sciences, and diagnostics in health, or indirectly, providing input to self-driving vehicles, drones, and other robotic systems.”

Felix Heide. Photo by Sameer A. Khan/Fotobuddy

Heide’s fascination with this intersection between humans and machines started in Germany, where he earned a master’s degree studying how computers can be trained to recognize patterns that human eyes tend to miss. That work led Heide to pursue his Ph.D. at the University of British Columbia and a postdoctoral position at Stanford University, where he continued to zero in on the role that machine learning could play in advancing the use of cameras in our daily lives.

“[Typical camera] sensors record a focused image just as photographic film did in the middle of the last century,” explains Heide. “I am excited about the potential to rethink these capture and vision pipelines of today. Allowing computers to design and evolve cameras may not only make it possible to overcome fundamental limitations of existing 'one-size-fits-all' cameras and vision stacks but also to provide completely new imaging modalities.” 

One challenge that is top of mind not just for Heide but for computer vision researchers at large is how to create smart technologies, such as self-driving cars, that can see and react to dynamic environments in real time. Teaching these systems to see what human drivers cannot is a problem Heide set out to solve when he co-founded the software startup Algolux in 2016.

As chief technology officer of Algolux, Heide leads research and development for the company’s full autonomous driving stack, including improving end-to-end pipelines for camera range and image processing.

While Heide is no stranger to juggling his time between Algolux and academia, he says he is excited to continue his work in an environment like Princeton’s, where he is “thrilled” to see entrepreneurial efforts supported and embraced.

“I have only been at Princeton for a little longer than a year now and find the students and fellow scientists to be amazing,” Heide says. “I truly feel that Princeton is a place where you will see not only breakthrough technology on paper but also being scaled.”

At Princeton, Heide serves as lead researcher in the university’s Computational Imaging Lab, where he advises graduate students and teaches advanced courses in computer science, including computer graphics, which he describes as “the intersection of computer science, geometry, physics, and art.”

Heide arrived at Princeton at the beginning of the COVID-19 pandemic and says that he’s yet to have a completely “normal” teaching experience as a result. In spite of these circumstances, Heide says that his neural rendering course has been particularly well received by students.

Outside his lectures, Heide is also eagerly awaiting a somewhat unusual delivery to his lab this summer: a Mercedes-Benz. More than just a joy ride, Heide says, the vehicle will give his lab an opportunity for rigorous experimental and simulated testing to improve how cars “see” in extreme weather. This testing will eventually include a “rain chamber” in Japan.

“We are super excited about the arrival of a test vehicle from Mercedes in Princeton,” Heide says. “This vehicle will allow us to experiment in harsh and extreme conditions. We want to be able to allow the vehicles of tomorrow to see through fog, snow, rain, and even around corners.”

In spite of shipping delays, the test Mercedes arrived in Princeton just before summer. Here, Felix Heide and his research group take photos with the car on campus in front of East Pyne Hall. From left to right: Gene Chou, Brian Lou, Mario Bijelic, Praneeth Chakravarthula, Felix Heide, Xiao Li, David Borts.
Photo by Sameer A. Khan/Fotobuddy

This Mercedes project is only one of many research efforts already underway in Heide’s lab. He also recently contributed to a paper published in Nature Communications describing a groundbreaking advance in camera engineering: a camera the size of a grain of salt. A collaboration between researchers at the University of Washington and Princeton, the work was co-led by Heide’s Ph.D. advisee, Ethan Tseng.

This camera uses machine learning to precisely design millions of microscopic optical posts that together resolve images in extremely high definition, without the fuzziness of previous designs. This is a huge step forward not just for the field of neural nano-optics, Heide says, but for the miniaturization of cameras as well.

“To build a drastically smaller camera, we had to devise a new type of lens system that does not use conventional glass optics but an array of nano-sized antennas,” Heide says. “Instead of grinding glass or injection-molding glass, these new optics can be fabricated at an ultra-small scale in a similar fashion to computer chips.”

Heide says that there are many exciting possibilities for cameras like these, ranging from one day transforming the back of your phone into one large camera to nanophotonic cameras that execute computation through their optics. In the much nearer future, though, he says that these nano-scale cameras could play a big role in diagnosing and treating disease by imaging the inside of the human body like never before.

“That is the beauty of working in imaging and vision,” Heide says. “[You’re] at the interface between photons in the real world and computation!”
 
