I hold an M.Phil. degree from the Intelligent Driving Lab (iDLab) at Tsinghua University, where I worked with Prof. Shengbo Li and Prof. Bo Cheng. My research covers neural networks, reinforcement learning, autonomous driving, and quantum computing. I am dedicated to building more intelligent and safer AI for automated vehicles and robotics, while also developing a next-generation paradigm for neural network training.
Tsinghua University
Mechanical Engineering, M.Phil
Data Science, Certificate Program
Delft University of Technology
Joint Education Program
Beijing Jiaotong University
Computer Science, Dual B.Eng
Transportation, Dual B.Eng
We proposed the Ising learning algorithm, the first technique for training multilayer feedforward neural networks on Ising machines (quantum computers). It reduces training time by 90% compared with CPUs/GPUs.
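The core idea can be pictured by casting a training objective as an Ising/QUBO energy. The sketch below is not the paper's algorithm: it only shows how fitting a single linear layer with binary (±1) weights reduces to minimizing a quadratic spin energy, solved here with simulated annealing as a software stand-in for an Ising machine; the problem sizes and cooling schedule are arbitrary.

```python
import numpy as np

# Illustrative only: fit a single linear layer with spin weights w in {-1, +1}
# by minimizing ||X @ w - y||^2, which expands to an Ising-style energy
#   E(w) = w^T J w + h^T w + const,  with J = X^T X and h = -2 X^T y.
rng = np.random.default_rng(0)
n_samples, n_features = 64, 8
X = rng.normal(size=(n_samples, n_features))
w_true = rng.choice([-1.0, 1.0], size=n_features)
y = X @ w_true

J = X.T @ X                  # quadratic (coupling) terms
h = -2.0 * X.T @ y           # linear (field) terms

def energy(w):
    return w @ J @ w + h @ w

# Simulated annealing as a software stand-in for Ising-machine hardware.
w = rng.choice([-1.0, 1.0], size=n_features)
temperature = 5.0
for step in range(2000):
    i = rng.integers(n_features)
    w_new = w.copy()
    w_new[i] *= -1           # flip one spin
    delta = energy(w_new) - energy(w)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        w = w_new
    temperature *= 0.995     # cooling schedule

print("recovered true weights:", np.array_equal(w, w_true))
```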
We proposed FlipNet, a policy network incorporating Jacobian regularization and a Fourier filter layer. It can be plugged into most actor-critic RL algorithms to produce smoother actions in real-world applications.
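To illustrate the Jacobian-regularization ingredient alone (the Fourier filter layer is omitted), the hypothetical snippet below adds a penalty on the Frobenius norm of the policy's input-output Jacobian to a stand-in actor loss; the layer sizes, actor objective, and penalty coefficient are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical actor network; sizes are placeholders.
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
jac_coef = 1e-3  # assumed weighting of the Jacobian penalty

def jacobian_penalty(net, obs):
    """Frobenius-norm penalty on d(action)/d(obs), averaged over the batch."""
    obs = obs.clone().requires_grad_(True)
    act = net(obs)
    penalty = 0.0
    for k in range(act.shape[1]):            # one backward pass per action dim
        grad = torch.autograd.grad(act[:, k].sum(), obs, create_graph=True)[0]
        penalty = penalty + grad.pow(2).sum(dim=1)
    return penalty.mean()

obs_batch = torch.randn(32, 4)               # dummy observations
actor_loss = -policy(obs_batch).mean()       # stand-in for the usual actor objective
loss = actor_loss + jac_coef * jacobian_penalty(policy, obs_batch)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```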
We proposed DACER, an online reinforcement learning algorithm that utilizes a diffusion model as the actor network to enhance the representational capacity of the policy.
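For a flavor of what a diffusion-based actor looks like, the sketch below runs a generic DDPM-style reverse process that denoises Gaussian noise into an action conditioned on the state. This is not DACER's exact formulation; the network sizes, step count, and noise schedule are assumed.

```python
import torch
import torch.nn as nn

# Generic DDPM-style action sampler; not DACER's exact formulation.
STATE_DIM, ACTION_DIM, N_STEPS = 4, 2, 10
betas = torch.linspace(1e-4, 0.2, N_STEPS)          # assumed noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class NoisePredictor(nn.Module):
    """Predicts the noise in a noisy action, conditioned on state and step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + 1, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM),
        )
    def forward(self, state, noisy_action, t):
        t_embed = t.float().unsqueeze(-1) / N_STEPS
        return self.net(torch.cat([state, noisy_action, t_embed], dim=-1))

@torch.no_grad()
def sample_action(model, state):
    """Reverse diffusion: start from Gaussian noise and denoise step by step."""
    a = torch.randn(state.shape[0], ACTION_DIM)
    for t in reversed(range(N_STEPS)):
        t_batch = torch.full((state.shape[0],), t)
        eps = model(state, a, t_batch)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        a = (a - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            a = a + torch.sqrt(betas[t]) * torch.randn_like(a)
    return torch.tanh(a)    # squash into the action range

actor = NoisePredictor()
actions = sample_action(actor, torch.randn(8, STATE_DIM))
print(actions.shape)  # torch.Size([8, 2])
```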
We proposed SmODE, a variant of the neural ODE that smooths control actions in RL. A mapping function is incorporated to estimate how quickly the system dynamics change.
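The sketch below shows a generic ODE-style policy layer, not SmODE itself: the hidden state evolves under learned dynamics integrated with a few Euler steps, and a small state-dependent "speed" network scales how fast the hidden state may change, loosely mirroring the mapping function mentioned above. All module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class ODEPolicyCell(nn.Module):
    """Generic ODE-style policy layer (illustrative, not SmODE itself)."""
    def __init__(self, obs_dim=4, hidden_dim=32, act_dim=2, n_euler_steps=5):
        super().__init__()
        self.dynamics = nn.Sequential(             # learned vector field dh/dt
            nn.Linear(obs_dim + hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.speed = nn.Sequential(                # scales how fast h may change
            nn.Linear(obs_dim, 1), nn.Softplus(),
        )
        self.head = nn.Linear(hidden_dim, act_dim)
        self.n_euler_steps = n_euler_steps
        self.dt = 1.0 / n_euler_steps

    def forward(self, obs, h):
        s = self.speed(obs)                        # state-dependent time scale
        for _ in range(self.n_euler_steps):        # explicit Euler integration
            dh = self.dynamics(torch.cat([obs, h], dim=-1))
            h = h + self.dt * s * dh
        return self.head(h), h

cell = ODEPolicyCell()
h = torch.zeros(1, 32)
for _ in range(3):                                 # roll out over a few time steps
    action, h = cell(torch.randn(1, 4), h)
print(action.shape)  # torch.Size([1, 2])
```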
We proposed Smonet, a policy network for RL with low-pass filtering capability that alleviates action non-smoothness by learning a low-frequency representation within its hidden layers.
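One minimal way to picture low-pass filtering inside a policy network, purely as an illustration rather than Smonet's actual architecture, is an exponential moving average over hidden features across time steps, which suppresses high-frequency jitter in the actions; the smoothing factor and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class LowPassPolicy(nn.Module):
    """Policy with an EMA low-pass filter on hidden features (illustrative only)."""
    def __init__(self, obs_dim=4, hidden_dim=64, act_dim=2, alpha=0.2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, act_dim)
        self.alpha = alpha                 # smaller alpha -> stronger smoothing

    def forward(self, obs, filtered_hidden=None):
        hidden = self.encoder(obs)
        if filtered_hidden is None:
            filtered_hidden = hidden
        else:
            # First-order low-pass filter across time steps.
            filtered_hidden = self.alpha * hidden + (1 - self.alpha) * filtered_hidden
        return self.head(filtered_hidden), filtered_hidden

policy = LowPassPolicy()
state = None
for _ in range(5):                          # actions vary slowly across steps
    action, state = policy(torch.randn(1, 4), state)
print(action.shape)  # torch.Size([1, 2])
```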
We proposed a smooth policy network (LipsNeXt) and a smooth distributional soft actor-critic algorithm (DSAC-S) to jointly optimize control precision and action smoothness.
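As a toy illustration of optimizing both objectives at once (not the DSAC-S update itself), the hypothetical snippet below trains an actor on a Q-value term plus a penalty on how much the action changes between consecutive states; the networks and coefficients are made up.

```python
import torch
import torch.nn as nn

# Hypothetical actor update combining control performance (Q-value) with an
# action-smoothness penalty on consecutive states; coefficients are assumed.
actor = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2), nn.Tanh())
critic = nn.Sequential(nn.Linear(4 + 2, 64), nn.Tanh(), nn.Linear(64, 1))  # untrained here
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)
smooth_coef = 0.1

obs, next_obs = torch.randn(32, 4), torch.randn(32, 4)         # dummy transition batch
act = actor(obs)
q_value = critic(torch.cat([obs, act], dim=-1)).mean()          # control performance term
smoothness = (actor(next_obs) - act).pow(2).sum(dim=1).mean()   # action change penalty

loss = -q_value + smooth_coef * smoothness                      # optimize both jointly
optimizer.zero_grad()
loss.backward()
optimizer.step()
```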
We proposed LipsNet, a smooth and robust neural network with an adaptive Lipschitz constant, to address the action fluctuation problem in RL.
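The sketch below conveys the general idea of bounding a policy's local Lipschitz constant, not necessarily the exact published formulation: the raw network output is rescaled by a learned, state-dependent constant divided by the local Jacobian norm, so sensitivity to input perturbations stays controlled. All module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveLipschitzPolicy(nn.Module):
    """Sketch of a Lipschitz-bounded policy: the raw output is rescaled by a
    state-dependent constant divided by the local Jacobian norm (illustrative)."""
    def __init__(self, obs_dim=4, hidden_dim=64, act_dim=2, eps=1e-4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, act_dim),
        )
        self.lipschitz = nn.Sequential(      # learns the allowed local Lipschitz constant
            nn.Linear(obs_dim, 1), nn.Softplus(),
        )
        self.eps = eps

    def forward(self, obs):
        obs = obs.clone().requires_grad_(True)
        raw = self.backbone(obs)
        # Estimate the local Jacobian norm of the backbone at this input.
        grad_norms = []
        for k in range(raw.shape[1]):
            g = torch.autograd.grad(raw[:, k].sum(), obs, create_graph=True)[0]
            grad_norms.append(g.pow(2).sum(dim=1))
        jac_norm = torch.sqrt(torch.stack(grad_norms, dim=1).sum(dim=1, keepdim=True))
        k_x = self.lipschitz(obs)            # adaptive, state-dependent constant
        return k_x * raw / (jac_norm + self.eps)

policy = AdaptiveLipschitzPolicy()
action = policy(torch.randn(8, 4))
print(action.shape)  # torch.Size([8, 2])
```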