For years, robotics engineers and researchers have struggled to give robots and drones better vision. In the past, approaches like stereoscopic cameras, color analysis, pixel counting, laser imaging, and deep learning have been attempted with mixed results. Conventional cameras exploit only a small fraction of what optical technologies in drones and other robotics are truly capable of surveying.
Visual sensor platforms such as lidar and thermal imaging give autonomous technologies and AI more intricate ways to map their surroundings. These optical platforms have begun to be applied to AI and autonomous systems, but they still face hindering issues with image quality, battery life, and performance efficiency.
Researchers and engineers from Stanford and UC San Diego have collaborated on a new camera that takes advantage of these more innovative optical platforms. Described as the first of its kind, the "four-dimensional" camera is a single-lens light field camera with a wide field of view. It operates with spherical lenses originally developed for DARPA's Soldier CENtric Imaging via Computational Cameras (SCENICC) program.
These lenses capture a view spanning nearly one-third of the circle around the camera, which helps form 360-degree images while resolving 125 megapixels per video frame. The camera dispenses with the fiber bundles used in the original DARPA design, replacing them with a combination of lenslets developed at UC San Diego and digital signal processing and light field photography technology from Stanford, which the team says gives the camera its "fourth-dimension" capabilities.
Light field technology records the two-axis direction of the light entering the lens and combines it with conventional 2D imaging. As a result, the image contains additional information about light position and direction, which allows images to be refocused after they are captured. This lets robots and AI systems see through conditions that would normally obscure their vision, such as rain. The camera also improves close-up imaging and is better at gauging object distances and surface textures. That could let AI systems determine their precise distance from particular objects, and even whether those objects are moving or what they are made of.
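The post-capture refocusing described above is commonly implemented with a "shift-and-add" scheme: each sub-aperture view of the 4D light field is shifted in proportion to its offset from the lens center, then the views are averaged. The sketch below is a minimal illustration of that general idea, not the Stanford/UC San Diego team's actual pipeline; the array layout, the `alpha` refocus parameter, and the synthetic point-source data are all assumptions made for the example.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field L[u, v, y, x] by
    shift-and-add: shift each sub-aperture view in proportion to
    its (u, v) offset from the lens center, then average."""
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Tiny synthetic light field: a single bright point that shifts one
# pixel per step in (u, v), as a point off the focal plane would.
U = V = 3
H = W = 9
lf = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        lf[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0

sharp = refocus(lf, alpha=-1.0)   # shifts cancel the parallax; point lands at (4, 4)
blurred = refocus(lf, alpha=0.0)  # plain average; point smeared across 9 pixels
```

With `alpha = -1.0` the per-view shifts exactly undo the point's parallax, so all nine views stack at the center pixel; with `alpha = 0.0` no correction is applied and the energy stays spread out, which is what an out-of-focus point looks like.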
The camera is presently a proof-of-concept device, but the researchers believe a mature version of the technology will help robots navigate tightly confined spaces, land drones, aid autonomous technologies, and even let augmented reality systems produce seamlessly integrated rendering. The camera is expected to make navigation software less tedious for drones and other mobile technologies by reducing the latency and flight errors that can compromise safe navigation and landing in complex environments. By changing how drones see the world, such cameras could let remote operators fly them more simply, quickly, and precisely.