Yu Bai

Yu Bai has been a Research Associate Professor at the Institute of Mathematical Sciences since 2023. His research interests include artificial intelligence (AI), embedded hardware, neuromorphic computing, nano-scale computing systems with novel silicon and post-silicon devices, and low-power digital and mixed-signal complementary metal-oxide-semiconductor (CMOS) circuit design.

Bai is also an Associate Professor of Computer Engineering at California State University, Fullerton. He earned his PhD from the Department of Electrical and Computer Engineering at the University of Central Florida in 2016. Prior to his academic career, he worked on data analysis at Siemens Energy Inc.

His research on artificial intelligence, with applications ranging from unmanned robots to improving academic performance in STEM, has been supported by grants from the Army Research Office, the National Science Foundation, and the National Academies of Sciences, Engineering, and Medicine.

X. Ma, J. Tang, and Y. Bai, “Locality-sensing Fast Neural Network (LFNN): An Efficient Neural Network Acceleration Framework via Locality Sensing for Real-time Video Queries,” in Proceedings of the 24th International Symposium on Quality Electronic Design (ISQED), San Francisco, CA, USA, 2023, pp. 1-8.

M. Liu, M. Yin, K. Han, R. F. DeMara, B. Yuan, and Y. Bai, “Algorithm and hardware co-design co-optimization framework for LSTM accelerator using quantized fully decomposed tensor train,” Internet of Things, 2023, 100680.

M. Liu, S. Luo, K. Han, R. F. DeMara, and Y. Bai, “Autonomous Binarized Focal Loss Enhanced Model Compression Design Using Tensor Train Decomposition,” Micromachines, vol. 13, no. 10, 2022, 1738.

M. Liu, P. Borulkar, M. Hossain, R. F. DeMara, and Y. Bai, “Spin-Orbit Torque Neuromorphic Fabrics for Low-Leakage Reconfigurable In-Memory Computation,” IEEE Transactions on Electron Devices, vol. 69, no. 4, pp. 1727-1735, April 2022.

Y. Xia, S. Qu, S. Goudos, Y. Bai, and S. Wan, “Multi-object tracking by mutual supervision of CNN and particle filter,” Personal and Ubiquitous Computing, vol. 25, no. 6, pp. 979-988, 2021.

A. Samiee, P. Borulkar, R. F. DeMara, P. Zhao, and Y. Bai, “Low-Energy Acceleration of Binarized Convolutional Neural Networks using a Spin Hall Effect based Logic-in-Memory Architecture,” IEEE Transactions on Emerging Topics in Computing, 2019.

C. Ding, S. Liao, Y. Wang, Z. Li, N. Liu, Y. Zhuo, C. Wang, X. Qian, Y. Bai, G. Yuan, X. Ma, Y. Zhang, J. Tang, Q. Qiu, X. Lin, and B. Yuan, “CirCNN: Accelerating and compressing deep neural networks using block-circulant weight matrices,” in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-50), New York, NY, USA, 2017, pp. 395-408.