I am Yongbo Qian, a research associate advised by Prof. Daniel D. Lee and Prof. Kostas Daniilidis in the GRASP (General Robotics, Automation, Sensing and Perception) Laboratory. My research interests are computer vision and robotics, and I have a wide range of robotics-related experience, including robotic perception, learning, planning, and multi-robot coordination. I obtained my Master of Science in Engineering degree in Robotics from the University of Pennsylvania in May 2016.
Before coming to Penn, I received my B.S. degree in Electrical Engineering with a minor in Computer Science from the University of Illinois at Urbana-Champaign in 2014.
I can be reached via email at: yongbo (at) seas (dot) upenn (dot) edu
Research is the core of my work, and I am proud to have been part of several successful teams.
My work is mainly on the vision and localization systems of humanoid robots, focusing on how robots can robustly recognize objects under inconsistent lighting conditions and how they can accurately localize themselves on the soccer field from visual features. Beyond that, I also manage the team: recruiting and training new members, designing and overseeing projects, and communicating with the university and the wider RoboCup community.
Our open-source code is available in the UPenn-RoboCup repository on GitHub.
I am the UPenn integration lead in the Robotics Collaborative Technology Alliance (RCTA) program! I have been working closely with General Dynamics to integrate a real-time deep learning based object detection and pose estimation module on mobile robots for the task of room search and manipulation.
Remote Visual Inspection (2013-2014)
I was a research assistant at the Computer Vision and Robotics Laboratory in the Beckman Institute. I worked on designing motor control for a custom-built track vehicle for video acquisition, calibrating a customized hemispherical camera, and using machine vision techniques to monitor railway track conditions. This project was supervised by Dr. John M. Hart and Prof. Narendra Ahuja and sponsored by the Federal Railroad Administration.
State Farm R&D (2014)
I was an IT/Systems intern at the State Farm Research and Development Center, working on a confidential project implementing computer vision techniques on Android devices.
Respiratory Rate Detection (2012)
I was a visiting undergraduate researcher advised by Prof. Pai-Chi Li in the Ultrasonic Imaging Lab at National Taiwan University, working on a human respiratory motion detection and estimation algorithm using ultra-wideband radar.
Yongbo Qian, Xiang Deng, Alex Baucom, Daniel Lee. "The UPennalizers RoboCup Standard Platform League Team Description Paper 2017." The 21st RoboCup International Symposium. 2017. [PDF]
Yongbo Qian, Alex Baucom, Daniel Lee. "Perception and Strategy Learning in Robot Soccer." The 11th Workshop on Humanoid Soccer Robots at 16th IEEE-RAS International Conference on Humanoid Robots. 2016. [PDF]
Yongbo Qian, Daniel Lee. "Adaptive Field Detection and Localization in Robot Soccer." The 20th RoboCup International Symposium. 2016. [PDF]
Yongbo Qian, et al. "The UPennalizers RoboCup Standard Platform League Team Description Paper 2016." The 20th RoboCup International Symposium. 2016. [PDF]
Yongbo Qian. "Remote Visual Inspection of the Track and Right-of-way: Verification of Hemispherical Camera Output for Computer Vision Processing." The Illinois Digital Environment for Access to Learning and Scholarship. 2014. [Link]
I worked on setting up the course site, designing assessments and the auto-grader, creating supplemental course videos, and managing the discussion forum for 70,000+ students from around the world.
I love learning and applying knowledge by building projects. This portfolio contains a few of the projects I am most proud of.
This is a semester-long project for the Integrated Intelligence for Robotics course. We designed and built SNAP, an intelligent personal assistant robot, with software integration of various modules, including object detection, speech recognition, manipulation, planning, SLAM, and human-robot interaction, using ROS. The robot is capable of navigating indoor environments and executing tasks such as object search and retrieval. This work was a prototype for a low-cost Autonomous Mobile Service Robot platform.
SLAM for Humanoid Robot
This is a project for the Learning in Robotics course. A particle-filter-based SLAM (Simultaneous Localization and Mapping) system was implemented to map different indoor environments. Sensor data was collected from the LIDAR, gyro, and odometry on the humanoid robot THOR, which competed in the DARPA Robotics Challenge.
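To give a flavor of the approach (this is an illustrative sketch, not the course code, and the toy likelihood function is hypothetical), one predict-weigh-resample step of a localization particle filter looks like:

```python
import numpy as np

def pf_update(particles, control, measurement, likelihood):
    """One predict-weigh-resample step of a particle filter.

    particles  : (N, 3) array of [x, y, theta] pose hypotheses
    control    : (dx, dy, dtheta) odometry increment
    likelihood : likelihood(pose, z) -> scalar score of pose given sensor reading z
    """
    n = len(particles)
    # Predict: apply the odometry increment with additive Gaussian noise.
    noise = np.random.normal(scale=[0.02, 0.02, 0.01], size=(n, 3))
    particles = particles + np.asarray(control) + noise
    # Weigh: score each hypothesis against the sensor reading.
    weights = np.array([likelihood(p, measurement) for p in particles])
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# Toy likelihood standing in for a LIDAR scan-match score:
# prefer poses near the (noisily) observed 2D position.
toy = lambda pose, z: np.exp(-np.sum((pose[:2] - z) ** 2))

particles = np.zeros((100, 3))
particles, weights = pf_update(particles, (0.1, 0.0, 0.0),
                               np.array([0.1, 0.0]), toy)
```

In the real system the likelihood would come from matching the LIDAR scan against the occupancy grid built so far.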
Path Planning on Aerial Maps
Inspired by Google Maps, I built my own route planner on an aerial map. This was implemented by first creating a bag of features from the map image, then using an imitation learning algorithm (Ratliff et al.) to construct an optimal cost map from those features, and finally running A* search to find the shortest path between two user-selected points.
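Once the learned cost map is in hand, the final step reduces to standard A* over a grid. A minimal sketch (the toy 3x3 grid and costs are hypothetical, standing in for the learned cost map):

```python
import heapq

def astar(cost, start, goal):
    """A* over a 2D grid where cost[r][c] is the per-cell traversal cost."""
    rows, cols = len(cost), len(cost[0])
    # Manhattan distance: admissible since every cell costs at least 1.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                ng = g + cost[nr][nc]
                heapq.heappush(frontier,
                               (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

# The expensive center cell plays the role of high-cost terrain in the map.
grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
path = astar(grid, (0, 0), (2, 2))
```

The planner routes around the expensive center cell, exactly as the real planner routes around high-cost terrain learned from the aerial image features.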
Fly a Quadrotor!
This is the project for the Advanced Robotics course. We implemented a PD controller with A* path planning and a minimum-jerk trajectory generation algorithm on the KMel Nano+ quadrotor platform. The quadrotor could autonomously navigate through a series of waypoints to follow a given trajectory and avoid obstacles (if present) in the environment. Vision-based pose estimation and Error-State Kalman Filter (ESKF) based state estimation and mapping algorithms were also implemented in this course.
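Two of the pieces above have compact closed forms worth showing. This is an illustrative sketch (rest-to-rest segments per axis, with hypothetical gains), not the project code:

```python
def min_jerk(x0, xf, T, t):
    """Minimum-jerk position between rest states x0 and xf over duration T.

    The classic quintic 10s^3 - 15s^4 + 6s^5 has zero velocity and
    acceleration at both endpoints, minimizing integrated squared jerk.
    """
    s = t / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def pd_accel(x, v, x_des, v_des, kp=8.0, kd=4.0):
    """PD feedback acceleration toward the desired position and velocity."""
    return kp * (x_des - x) + kd * (v_des - v)
```

In flight, the trajectory generator supplies (x_des, v_des) at each control tick and the PD loop tracks it; the same structure is applied independently on each axis.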
Structure from Motion
This is the final project for the Machine Perception course. I implemented the full Structure from Motion pipeline, including two-view reconstruction, triangulation, PnP, and bundle adjustment, to reconstruct a 3D point cloud and camera poses from six sequential images taken with a GoPro Hero3 camera.
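The triangulation step admits a short linear (DLT) formulation. A sketch under simplifying assumptions (normalized cameras, no noise; the example projection matrices are hypothetical), not the project implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) pixel coordinates in each view
    Each observation gives two rows of A; solving A X = 0 by SVD
    yields the homogeneous 3D point as the smallest singular vector.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Example: identity camera and a second camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
X = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))
```

In the full pipeline, points triangulated this way seed PnP for the next camera and serve as the initialization that bundle adjustment then refines.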
Bike Sharing Demand Prediction
This is the final project for the Machine Learning course. We participated in the Bike Sharing Demand Prediction competition on Kaggle and ranked in the top 25%. Our approach combined the Random Forest algorithm with the Extra-Trees method.
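The shape of that approach can be sketched with scikit-learn; the synthetic data below merely stands in for the Kaggle features (hour, temperature, humidity, ...), and averaging the two ensembles is one simple way to combine them (not necessarily our exact blend):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor

# Synthetic stand-in for the competition features and demand signal.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 100 * X[:, 0] + 20 * X[:, 1] + rng.normal(scale=2, size=500)

# Fit both tree ensembles on the training split.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:400], y[:400])
et = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X[:400], y[:400])

# Blend the two models' predictions on the held-out rows.
pred = (rf.predict(X[400:]) + et.predict(X[400:])) / 2
```

Extra-Trees differs from Random Forest mainly in choosing split thresholds at random rather than optimizing them, which adds variance reduction when the two are blended.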
Shadow Detection and Removal
This is the final project for the Computational Photography course. I developed a method for detecting and removing shadows based on an image's lighting model, enabling better scene interpretation.
Besides robotics, I am also passionate about...
Innovation and Design
Design thinking is critical to technology innovation. I am honored to have won the 2015 Penn Design Challenge with my team! In this challenge, organized by the Wharton Innovation & Design Club and the Austin-based FinTech startup Able Lending, we designed an innovative user interface that creates an effective and meaningful way to connect small businesses with their backers.
I love spreading ideas about science and technology. I volunteer regularly through the Franklin Institute, the GRASP Lab, and the Penn Center for Innovation for a variety of talks, presentations, demonstrations, tours, and showcase events. The audience ranges from kids and students of all levels to parents, professionals, and local government officials.
I enhanced my problem-solving and teamwork skills through several consulting experiences. I was a project manager at Cube Consulting, the first branch of Junior Enterprise in the United States, where I led my team in working with iFoundry to promote innovation in engineering education.
I was also a member of PBG Healthcare Consulting, helping a healthcare IT company deliver its technology service to the small-business market.