TY - JOUR
T1 - Computational event-driven vision sensors for in-sensor spiking neural networks
AU - Zhou, Yue
AU - Fu, Jiawei
AU - Chen, Zirui
AU - Zhuge, Fuwei
AU - Wang, Yasai
AU - Yan, Jianmin
AU - Ma, Sijie
AU - Xu, Lin
AU - Yuan, Huanmei
AU - Chan, Mansun
AU - Miao, Xiangshui
AU - He, Yuhui
AU - Chai, Yang
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Nature Limited.
PY - 2023/11
Y1 - 2023/11
N2 - Neuromorphic event-based image sensors capture only the dynamic motion in a scene, which is then transferred to computation units for motion recognition. This approach, however, leads to time latency and can be power consuming. Here we report computational event-driven vision sensors that capture and directly convert dynamic motion into programmable, sparse and informative spiking signals. The sensors can be used to form a spiking neural network for motion recognition. Each individual vision sensor consists of two parallel photodiodes with opposite polarities and has a temporal resolution of 5 μs. In response to changes in light intensity, the sensors generate spiking signals with different amplitudes and polarities by electrically programming their individual photoresponsivity. The non-volatile and multilevel photoresponsivity of the vision sensors can emulate synaptic weights and can be used to create an in-sensor spiking neural network. Our computational event-driven vision sensor approach eliminates redundant data during the sensing process, as well as the need for data transfer between sensors and computation units.
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:001101676000004
UR - https://openalex.org/W4388625662
UR - https://www.scopus.com/pages/publications/85176401249
DO - 10.1038/s41928-023-01055-2
M3 - Journal Article
SN - 2520-1131
VL - 6
SP - 870
EP - 878
JO - Nature Electronics
JF - Nature Electronics
IS - 11
ER -