Researchers from The Hong Kong Polytechnic University (PolyU) and Yonsei University in Seoul have developed vision sensors that emulate, and even surpass, the human retina’s ability to adapt to a wide range of lighting conditions.
These bioinspired sensors could usher in the next generation of artificial-vision systems used in autonomous vehicles and manufacturing, as well as find new applications in edge computing and the Internet of Things.
"They will greatly improve machine vision systems used for visual analysis and identification tasks," said Dr. Chai Yang, associate professor at the Department of Applied Physics, and Assistant Dean (Research), Faculty of Applied Science and Textiles, PolyU, who led the research.
Improving machine vision
Machine vision systems are cameras and computers that capture and process images for tasks such as facial recognition. They need to be able to "see" objects in a wide range of lighting conditions, which demands intricate circuitry and complex algorithms. Such systems are rarely efficient enough to process a large volume of visual information in real time—unlike the human brain.
The new bioinspired sensors may offer a solution by adapting to different light intensities directly at the sensor, instead of relying on back-end computation. The human eye adapts to different levels of illumination, from very dark to very bright and vice versa, allowing us to identify objects accurately under a range of lighting conditions. The new sensors aim to mimic this adaptability.
"The human pupil may help adjust the amount of light entering the eye," said Dr. Chai, "but the main adaptation to brightness is performed by retina cells."
Natural light intensity spans about 280 dB. The human retina can adapt to environments ranging from starlight to bright sunlight, a range of about 160 dB, while conventional silicon-based sensors cover only about 70 dB. The new sensors developed by Dr. Chai's team have an effective range of up to 199 dB.
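To put the decibel figures in context: dynamic range for image sensors is conventionally expressed as 20·log10 of the ratio between the brightest and dimmest detectable intensities. The short Python sketch below illustrates that relationship; the intensity ratios are assumed values chosen to reproduce the quoted dB numbers, not figures from the study.

    import math

    def dynamic_range_db(i_max, i_min):
        # Dynamic range in decibels, using the 20*log10 convention
        # common for image sensors.
        return 20 * math.log10(i_max / i_min)

    # Illustrative intensity ratios (assumed) matching the quoted figures.
    print(dynamic_range_db(1e14, 1.0))      # ~280 dB: natural light
    print(dynamic_range_db(1e8, 1.0))       # ~160 dB: human retina
    print(dynamic_range_db(10**9.95, 1.0))  # ~199 dB: new sensors
    print(dynamic_range_db(10**3.5, 1.0))   # ~70 dB: conventional silicon sensors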
Light detectors developed
To achieve this, the research team developed light detectors, called phototransistors, using a dual layer of atomically thin molybdenum disulphide, a semiconductor with unique electrical and optical properties. The researchers then introduced "charge trap states"—impurities or imperfections in a solid's crystalline structure that restrict the movement of charge—to the dual layer.
"These trap states enable the storage of light information," the researchers reported, "and dynamically modulate the optoelectronic properties of the device at the pixel level." By controlling the movement of electrons, the trap states enabled the researchers to precisely adjust the amount of electricity conducted by the phototransistors. This in turn allowed them to control the device's photosensitivity, or its ability to detect light.
Each of the new vision sensors is made up of arrays of such phototransistors. They mimic the rod and cone cells of the human eye, which are respectively responsible for detecting dim and bright light. As a result, the sensors can detect objects in differently lit environments as well as switch between, and adapt to, varying levels of brightness—with an even greater range than the human eye.
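Reusing the toy adaptation sketch above, the idea can be seen in miniature: the same pattern presented at very different light levels settles into a comparable output range after a few frames, so a single downstream recognition step can handle both. Again, this only illustrates the concept, not the published device behaviour.

    import numpy as np

    scene = np.random.rand(4, 4)               # a toy "object" pattern
    dim, bright = scene * 1e-3, scene * 1e3    # the same scene under very different illumination

    for frames in ([dim] * 10, [bright] * 10):
        # Both versions end up in a similar, contrast-preserving range.
        print(adapt_pixels(frames)[-1].round(2))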
"The sensors reduce hardware complexity and greatly increase the image contrast under different lighting conditions," said Dr Chai, "thus delivering high image recognition efficiency."