Researchers at Stanford University and UC San Diego have developed a 4D camera designed to improve robotic vision. It uses a spherical lens and advanced algorithms to capture information across a 138-degree field of view, allowing robots not only to navigate but also to better understand their environment.
The camera dispenses with the fiber bundles used in earlier versions of the spherical lens in favor of a combination of lenslets developed at UC San Diego and digital signal processing and light field photography technology from Stanford, which gives the camera its fourth dimension. Light field technology records the two-axis direction of the light entering the lens alongside the 2D image, so each capture contains far more information about light position and direction and can be refocused after it has been taken. This lets a robot see through things that could obscure its vision, such as rain. The camera can also improve close-up images and better ascertain object distances and surface textures, which could help a robot operating in a confined space understand its surroundings: how far away objects are, what they are made of, and whether they are moving.
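The article does not describe the team's algorithms, but the post-capture refocusing it mentions can be illustrated with the standard shift-and-sum technique used in light field photography: a 4D light field is treated as a grid of sub-aperture views, and shifting each view in proportion to its aperture offset before averaging moves the synthetic focal plane. The sketch below is illustrative only; the array shapes and the `refocus` function are assumptions, not the researchers' code.

```python
import numpy as np

def refocus(light_field, shift):
    """Synthetically refocus a 4D light field by shift-and-sum.

    light_field: array of shape (U, V, H, W) -- a U x V grid of
    sub-aperture views, each an H x W grayscale image. The (U, V)
    axes are the directional dimensions that make the data "4D".
    shift: pixels of translation per unit of aperture offset;
    varying this value moves the synthetic focal plane.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2  # aperture centre
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture centre, then accumulate.
            dy = int(round((u - cu) * shift))
            dx = int(round((v - cv) * shift))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)  # average of all shifted views
```

With `shift = 0` the views are simply averaged (focus at the original plane); other values of `shift` bring objects at other depths into focus, which is why focus can be chosen after capture.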
The camera is presently a proof-of-concept device, but in the near future it could help robots navigate small areas, land drones, guide self-driving cars, and enable augmented and virtual reality. The video below shows the first images from the Wide-FOV Monocentric Light Field Camera.
Source: UC San Diego