Dr. Deng holds a lecturer position in computer science at the Beijing University of Technology. He received the B.Eng. degree from the China University of Petroleum (Beijing), China, in 2016, and the M.Sc. degree from the University of Florida, USA, in 2018. He obtained the Ph.D. degree from the City University of Hong Kong in 2021, under the supervision of Prof. You-Fu Li.
His research interests include object recognition, action recognition, optical flow estimation, and deep learning with event-based cameras. He also works on multi-modal fusion, salient object detection, 3D point cloud learning, and semantic segmentation.
Publications (# co-first author, * corresponding author)
- B. Xie#, Y. Deng#, Z. Shao, H. Liu, Y. Li* and H. Chen, “VMV-GCN: Volumetric Multi-View Based Graph CNN for Event Stream Classification,” in IEEE Robotics and Automation Letters (RA-L, with oral presentation in ICRA), December 2021. (Accepted. In press.)
- Y. Deng, H. Chen and Y. Li*, “MVF-Net: A Multi-view Fusion Network for Event-based Object Classification,” in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2021, doi: 10.1109/TCSVT.2021.3073673. (Early Access.)
- Y. Deng, H. Chen, H. Chen and Y. Li*, “Learning From Images: A Distillation Learning Framework for Event Cameras,” in IEEE Transactions on Image Processing (TIP), vol. 30, pp. 4919-4931, 2021, doi: 10.1109/TIP.2021.3077136.
- H. Chen, Y. Li*, Y. Deng et al., “CNN-Based RGB-D Salient Object Detection: Learn, Select, and Fuse,” in International Journal of Computer Vision (IJCV), vol. 129, pp. 2076-2096, 2021, doi: 10.1007/s11263-021-01452-0.
- Y. Deng, Y. Li* and H. Chen, “AMAE: Adaptive Motion-Agnostic Encoder for Event-Based Object Classification,” in IEEE Robotics and Automation Letters (RA-L, with oral presentation in IROS), vol. 5, no. 3, pp. 4596-4603, July 2020, doi: 10.1109/LRA.2020.3002480.
- H. Chen, Y. Deng, Y. Li*, T.-Y. Hung and G. Lin, “RGBD Salient Object Detection via Disentangled Cross-Modal Fusion,” in IEEE Transactions on Image Processing (TIP), vol. 29, pp. 8407-8416, 2020, doi: 10.1109/TIP.2020.3014734.