Alvin Zhu

B.S. Computer Engineering,
University of California, Los Angeles
Research Assistant at UCLA Robotics and Mechanisms Laboratory (RoMeLa)
I have published five research papers, four of them at IEEE robotics conferences, and have played key roles in multiple cutting-edge projects over the past year and a half as part of a graduate robotics research lab. My research lies at the intersection of robotics hardware, reinforcement learning, simulation, and robotic perception, with a focus on developing intelligent and adaptive robotic systems. I am passionate about leveraging reinforcement learning to optimize control policies for dynamic platforms such as legged robots and about bridging the sim-to-real gap for zero-shot policy transfer; toward that end, I have developed networks that achieve 99% accurate torque prediction for improved sim-to-real transfer. I also explore deep learning and vision transformers for vision-based tasks, such as object detection and decision-making, to enhance robotic performance in unstructured environments.
Robotics Projects Overview
Humanoid Locomotion using Deep Reinforcement Learning
I am developing a GPU-accelerated simulation and training pipeline for the humanoid robot BRUCE using MuJoCo-MJX for deep reinforcement learning locomotion. The pipeline enables BRUCE to achieve dynamic and stable movement, even under external disturbances, by leveraging deep RL algorithms. I am also writing a research paper on zero-shot reinforcement learning with torque-controlled actions and integrating LLMs into the training loop for automation.
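For a flavor of how such a pipeline exploits the GPU, here is a minimal sketch of a batched MJX rollout step; the model path (bruce.xml), the random-noise stand-in for the trained policy, and the batch size are illustrative assumptions, not the actual training code.

import jax
import jax.numpy as jnp
import mujoco
from mujoco import mjx

mj_model = mujoco.MjModel.from_xml_path("bruce.xml")  # hypothetical model file
mjx_model = mjx.put_model(mj_model)                   # copy the model to the GPU

def policy(obs, key):
    # Stand-in for the trained locomotion policy network.
    return 0.1 * jax.random.normal(key, (mj_model.nu,))

def rollout_step(data, key):
    obs = jnp.concatenate([data.qpos, data.qvel])
    data = data.replace(ctrl=policy(obs, key))
    return mjx.step(mjx_model, data)

# vmap + jit turn one physics step into thousands of parallel environments.
step_batch = jax.jit(jax.vmap(rollout_step))

n_envs = 4096
template = mjx.make_data(mjx_model)
init_keys = jax.random.split(jax.random.PRNGKey(0), n_envs)
# Slightly randomized initial states give the learner diverse rollouts.
batch = jax.vmap(lambda k: template.replace(
    qpos=template.qpos + 0.01 * jax.random.normal(k, template.qpos.shape)))(init_keys)
batch = step_batch(batch, jax.random.split(jax.random.PRNGKey(1), n_envs))

Because mjx.step is a pure JAX function, simulation and policy inference can stay on the GPU together, which is what makes this style of RL training fast.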
Advanced Robotic Perception System for Humanoid Soccer
I developed the complete perception stack for the humanoid robot ARTEMIS, giving it full spatial awareness in dynamic RoboCup soccer environments. The stack fuses the YOLOv8 deep learning model and classical computer vision algorithms with point clouds for object detection, 3D pose estimation, and proximity detection that is robust to heavy sensor noise, allowing ARTEMIS to aim and score 45 goals across six seeded matches and dethrone the reigning champions 6 goals to 1.
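As a simplified illustration of how a 2D detection becomes a 3D estimate, the sketch below back-projects a YOLOv8 ball detection through an aligned depth image; the weight file, camera intrinsics, and median-window denoising are assumptions for exposition, not the actual ARTEMIS stack.

import numpy as np
from ultralytics import YOLO

model = YOLO("ball.pt")  # hypothetical custom detection weights

def detect_ball_3d(rgb, depth, fx, fy, cx, cy):
    """Return the ball center in camera coordinates (meters), or None."""
    results = model(rgb, verbose=False)[0]
    if len(results.boxes) == 0:
        return None
    # Keep only the highest-confidence detection.
    box = results.boxes.xyxy[results.boxes.conf.argmax()].cpu().numpy()
    u = int((box[0] + box[2]) / 2)
    v = int((box[1] + box[3]) / 2)
    # A median over a small depth window rejects speckle noise and holes.
    window = depth[max(v - 3, 0):v + 4, max(u - 3, 0):u + 4]
    valid = window[window > 0]
    if valid.size == 0:
        return None
    z = float(np.median(valid)) / 1000.0  # depth image is in millimeters
    # Back-project the pixel through the pinhole camera model.
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])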
Object Segmentation using Vision Transformers and Deep Learning models
I integrated the Segment Anything Model (SAM) vision transformer with custom YOLOv8 detection weights to achieve 95% accurate segmentation of slide handles and stairs. The segmented objects' positions are then extracted from the Intel RealSense D435 camera's point cloud for use in simultaneous locomotion and grasping.
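In outline, the pipeline prompts SAM with YOLOv8 bounding boxes, roughly as sketched here; the weight filenames are placeholders, and the D435 point-cloud lookup that follows in the real system is omitted.

import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("handles_stairs.pt")                           # hypothetical weights
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # hypothetical checkpoint
predictor = SamPredictor(sam)

def segment(rgb):
    """Return one binary mask per detected slide handle or stair."""
    predictor.set_image(rgb)
    masks = []
    for box in detector(rgb, verbose=False)[0].boxes.xyxy.cpu().numpy():
        # Each detection box prompts SAM; take its single best mask.
        mask, _, _ = predictor.predict(box=box, multimask_output=False)
        masks.append(mask[0])
    return masks

Prompting SAM with boxes rather than points keeps each mask tied to the detector's class label, so every segmented region is already identified as a handle or a stair.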
Cost-Efficient 3D-Printed Robot Dog
The robot dog project involved designing and developing a fully functional quadruped robot. I 3D-modeled and manufactured the upgraded, larger version of the robot, ensuring a compact and efficient mechanical design optimized for strength and cost, and I implemented a PID control system integrated with an IMU to enable real-time balance and stability.
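As a toy sketch of that balance loop, the snippet below runs a PID controller on the IMU pitch angle; read_imu_pitch() and set_hip_pitch() are hypothetical stubs standing in for the robot's IMU driver and actuator interface, and the gains are purely illustrative.

import time

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Accumulate the integral and difference terms, then mix all three.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def read_imu_pitch():
    """Hypothetical IMU driver call; returns body pitch in radians."""
    return 0.0  # stub

def set_hip_pitch(cmd):
    """Hypothetical actuator command, in radians."""
    pass  # stub

pid = PID(kp=4.0, ki=0.2, kd=0.05)  # illustrative gains
dt = 0.01                           # 100 Hz control loop

for _ in range(1000):
    pitch = read_imu_pitch()
    correction = pid.update(0.0 - pitch, dt)  # drive pitch toward zero
    set_hip_pitch(correction)
    time.sleep(dt)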
Selected Publications
Cycloidal Quasi-Direct Drive Actuator Designs with Learning-based Torque Estimation for Legged Robotics
Alvin Zhu*, Yusuke Tanaka*, Fadi Rafeedi, and Dennis Hong
Accepted as first author to the 2025 IEEE International Conference on Robotics and Automation (ICRA)
Mechanisms and Computational Design of Multi-Modal End-Effector with Force Sensing using Gated Networks
Yusuke Tanaka*, Alvin Zhu*, Richard Lin, Ankur Mehta, and Dennis Hong
Accepted as co-first author to the 2025 IEEE International Conference on Robotics and Automation (ICRA)
Tethered Variable Inertial Attitude Control Mechanisms through a Modular Jumping Limbed Robot
Yusuke Tanaka, Alvin Zhu, and Dennis Hong
Accepted as second author to the 2025 IEEE Aerospace Conference (AeroConf)