- Based on the Robot Operating System (ROS2), with an Ackermann steering structure
- Programming in Python
- Supports Jetson NANO, Orin NANO, Orin NX, and Raspberry Pi 5 main control boards
- Supports remote control via mobile phone app, wireless gamepad, the ROS2 operating system, and computer keyboard
- Provides 105 teaching-video lessons and numerous source-code examples
The Yahboom Rosmaster R2 ROS2 Robot w/ Ackermann Structure (Jetson Standard Version w/o Board) is a mobile car with an Ackermann steering structure developed on the Robot Operating System (ROS2). It supports Jetson NANO, Orin NANO, Orin NX, and Raspberry Pi 5 as its main control board. The R2 is equipped with a high-performance hardware configuration, including a laser radar, a depth camera, a voice interaction module, two 520 motors, and racing rubber tires, which allow it to perform robot mapping and navigation, obstacle avoidance, automatic driving, human feature and action recognition, voice interaction control, and other functions. It also supports remote control via mobile phone app, wireless gamepad, the ROS2 operating system, computer keyboard, and other methods. Furthermore, users are provided with 105 teaching-video lessons and numerous source-code examples for reference.
Professional Ackermann Steering Structure
1. It adopts the steering method of modern cars, with an aluminum-alloy Ackermann chassis structure, one HQ metal steering gear (servo), and two 520 motors (the ideal steering geometry is sketched after this list).
2. Professional racing tires are fitted, and anti-collision strips are installed at the front of the car body, making it suitable for robot car competitions.
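For reference, the ideal Ackermann relationship between the inner and outer front-wheel angles can be sketched in a few lines of Python. The wheelbase, track, and turning-radius values below are placeholders, not the R2's published dimensions.

```python
import math

def ackermann_angles(wheelbase, track, turn_radius):
    """Ideal Ackermann steering: the inner wheel turns more sharply than the
    outer wheel so both front wheels roll about the same turning center.
    All lengths in meters; turn_radius is measured to the center of the
    rear axle. Returns (inner, outer) steering angles in degrees."""
    inner = math.atan(wheelbase / (turn_radius - track / 2))
    outer = math.atan(wheelbase / (turn_radius + track / 2))
    return math.degrees(inner), math.degrees(outer)

# Placeholder dimensions for illustration only.
print(ackermann_angles(wheelbase=0.20, track=0.16, turn_radius=0.50))
```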
Excellent Hardware Configuration
1. The multi-functional expansion board is compatible with four motherboards: Jetson NANO, Orin NANO, Orin NX, and Raspberry Pi 5.
2. Users can choose from various accessories, such as a lidar (SLAM A1/YDLIDAR 4ROS), a depth camera, a voice interaction module, and a 7-inch display screen.
Based on the Robot Operating System (ROS2)
1. The MediaPipe development framework is used to implement hand and face detection, 3D detection, and recognition functions (see the sketch after this list).
2. It can easily obtain depth-image data and point clouds, and combine them with deep-learning algorithms to achieve various AI functions.
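As a rough illustration of the MediaPipe workflow mentioned above, the sketch below runs hand detection on a webcam stream. The camera index and the display loop are assumptions for a desktop test, not the R2's shipped source code.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumed camera index
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```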
Multiple Functions and Remote Control Methods
1. The R2 robot can perform 3D visual mapping and navigation, lidar obstacle avoidance, visual recognition, target tracking, voice recognition and interaction, and other functions.
2. It supports control via mobile phone app, wireless gamepad, the ROS2 operating system, computer keyboard, and JupyterLab web-page programming (a minimal ROS2 control sketch follows below).
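As an illustration of how ROS2-based control typically works, the sketch below publishes geometry_msgs/Twist velocity commands from a Python (rclpy) node. The /cmd_vel topic name and the speed values are assumptions; the R2's own driver packages may expose different topics.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class SimpleTeleop(Node):
    def __init__(self):
        super().__init__('simple_teleop')
        # Assumed velocity-command topic; check the R2 driver for the real name.
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.send_cmd)

    def send_cmd(self):
        msg = Twist()
        msg.linear.x = 0.2   # forward speed in m/s (placeholder)
        msg.angular.z = 0.3  # turning rate in rad/s (placeholder)
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = SimpleTeleop()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```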