- Powered by NVIDIA Jetson Nano and based on ROS
- Supports a depth camera and Lidar for mapping and navigation
- Upgraded inverse kinematics algorithm
- Capable of deep learning and model training
- Note: This is the JetHexa Advanced Kit; two versions are available. The JetHexa Standard Kit is equipped with a monocular HD camera, while the JetHexa Advanced Kit comes with a Lidar and a depth camera.
Tutorial Link for JetHexa: https://drive.google.com/drive/folders/1YY5vy4sfUiNDoEkvPir-d634UZdzCQuT?usp=sharing
Reminder: the mapping and navigation functions are only applicable to the JetHexa Advanced Kit.
This link is for JetHexa Advanced Kit: https://www.robotshop.com/en/hiwonder-jethexa-ros-hexapod-robot-kit-powered-by-jetson-nano-with-lidar-depth-camera-support-slam-mapping-navigation-advanced-kit.html
JetHexa is an open-source hexapod robot based on the Robot Operating System (ROS). It is equipped with high-performance hardware, including an NVIDIA Jetson Nano, intelligent serial bus servos, a Lidar, and an HD camera or 3D depth camera, enabling robot motion control, mapping and navigation, tracking and obstacle avoidance, custom path prowling, human feature recognition, somatosensory interaction, and other functions.
Adopting a novel inverse kinematics algorithm, JetHexa supports tripod and ripple gaits with a highly configurable body posture, height, and speed, providing a smooth user experience.
JetHexa not only serves as an advanced platform for users to learn and verify hexapod locomotion, but also provides solutions for ROS development. To help users embark on a new journey into the world of ROS hexapod robots, ample ROS and robot learning materials and tutorials are provided.
1. Jetson Nano Control System
NVIDIA Jetson Nano can run mainstream deep learning frameworks such as TensorFlow, PyTorch, Caffe/Caffe2, Keras, and MXNet, and provides powerful computing power for demanding AI projects. Powered by the Jetson Nano, JetHexa can achieve image recognition, object detection and positioning, pose estimation, semantic segmentation, intelligent analysis, and other functions.
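As a quick sanity check of this on-board compute, the hedged sketch below uses PyTorch (one of the frameworks listed above) to confirm the Jetson Nano's GPU is visible and to run a tiny tensor operation on it; any of the other frameworks could be used the same way.

```python
# Minimal check that a deep learning framework can see the Jetson Nano's GPU.
# Assumes PyTorch is installed; other frameworks expose similar checks.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU detected; falling back to CPU")

# Run a tiny tensor operation on the selected device to confirm it works.
x = torch.rand(3, 3, device=device)
print(x @ x.T)
```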
2. Monocular Camera (with 2DOF Pan-tilt)
The monocular camera can rotate up, down, left, and right, and supports color tracking, autonomous driving, and more.
3. 3D Depth Camera
The depth camera can process depth map data and realize 3D vision mapping and navigation.
4. ROS Highlights
1) 2D Lidar Mapping, Navigation and Obstacle Avoidance:
JetHexa is equipped with a high-performance EAI G4 Lidar that supports mapping with diverse algorithms, including Cartographer, Hector, Karto, and Gmapping, as well as path planning, fixed-point navigation, and obstacle avoidance during navigation.
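For reference, a minimal ROS sketch of reading that Lidar is shown below; the "/scan" topic name and the use of rospy follow common ROS 1 conventions and are assumptions, not JetHexa's documented interface.

```python
#!/usr/bin/env python3
# Minimal ROS node that listens to the Lidar and reports the nearest obstacle.
# The "/scan" topic name is an assumption; check the robot's launch files for the actual name.
import math
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # Ignore invalid readings (inf/NaN) before taking the minimum range.
    valid = [r for r in msg.ranges
             if math.isfinite(r) and msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("Nearest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("nearest_obstacle_monitor")
    rospy.Subscriber("/scan", LaserScan, on_scan, queue_size=1)
    rospy.spin()
```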
2) RTAB-VSLAM 3D Vision Mapping and Navigation:
Supporting 3D color mapping in two ways, pure RTAB-Map vision or a fusion of vision and Lidar, JetHexa can navigate and avoid obstacles within the 3D map and perform global relocalization.
3) Multi-point Navigation and Obstacle Avoidance:
The Lidar detects the surroundings in real time, allowing JetHexa to avoid obstacles during multi-point navigation.
4) Depth Image Data, Point Cloud Image:
Through the corresponding API, JetHexa can obtain the depth image, color image, and point cloud from the camera.
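A hedged sketch of pulling those images over ROS topics is shown below; the topic names "/camera/rgb/image_raw" and "/camera/depth/image_raw" are typical of depth-camera drivers but are assumptions, not necessarily the names used by JetHexa's software.

```python
#!/usr/bin/env python3
# Minimal sketch of reading color and depth frames from ROS image topics.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_color(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo_throttle(1.0, "Color frame: %dx%d", frame.shape[1], frame.shape[0])

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")  # depth in mm or m
    center = depth[depth.shape[0] // 2, depth.shape[1] // 2]
    rospy.loginfo_throttle(1.0, "Depth at image center: %s", str(center))

if __name__ == "__main__":
    rospy.init_node("depth_camera_reader")
    rospy.Subscriber("/camera/rgb/image_raw", Image, on_color, queue_size=1)
    rospy.Subscriber("/camera/depth/image_raw", Image, on_depth, queue_size=1)
    rospy.spin()
```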
5) KCF Target Tracking:
Based on the KCF filtering algorithm, the robot can track a selected target.
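The sketch below shows the general KCF workflow with OpenCV's contrib tracker; it captures from a local webcam (index 0) purely for illustration, and depending on the OpenCV version the constructor may live under cv2.legacy.TrackerKCF_create instead.

```python
# Minimal KCF tracking sketch with OpenCV (requires opencv-contrib-python).
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read from camera")

# Let the user draw a box around the target, then initialize the KCF tracker on it.
bbox = cv2.selectROI("Select target", frame, fromCenter=False)
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = [int(v) for v in box]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("KCF tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```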
6) Depth Camera Obstacle Recognition:
With the help of the depth camera, JetHexa can detect the obstacle ahead and get past it.
7) Custom Path Prowling:
Users can customize a path and order the robot to prowl along the designed route.
8) Lidar Tracking:
By scanning a moving object in front of it, the Lidar enables the robot to track the target.
9) Lidar Guarding:
The Lidar guards the surroundings and sounds an alarm when an intruder is detected.
10) Color Recognition and Tracking:
The robot can recognize and track colors, and can be set to execute different actions according to the detected color.
11) Group Control:
A group of JetHexa robots can be controlled with a single wireless handle to perform actions uniformly and simultaneously.
12) Intelligent Formation:
A batch of robots can be controlled to patrol in different formations.
13) Canyon Crossing:
When the Lidar scans a canyon ahead, the robot adjusts its posture and direction to pass through it.
14) Auto Line Following:
The robot can recognize a line in a user-designated color and prowl along it.
15) Tag Recognition and Tracking:
JetHexa can recognize and position multiple AR tags at the same time.
16) Posture Detection:
The built-in IMU sensor detects the body posture in real time.
5. Upgraded Inverse Kinematics Algorithm (Tripod Gait/Ripple Gait):
One-click Gait Switching:
JetHexa supports switching between tripod gait and ripple gait at will.
1) "Moonwalk" in Fixed Speed and Height:
Through the inverse kinematics algorithm, JetHexa can remain stable during SLAM mapping and "moonwalk" at a constant speed and height.
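JetHexa's own gait code is not shown here, but the underlying idea can be illustrated with a standard analytical inverse kinematics routine for a single 3-joint hexapod leg; the link lengths and angle conventions below are hypothetical placeholders, not JetHexa's real dimensions.

```python
import math

def leg_ik(x, y, z, coxa=0.045, femur=0.075, tibia=0.125):
    """Analytical inverse kinematics for one 3-joint hexapod leg.

    (x, y, z) is the desired foot position in the leg's coxa frame (metres);
    link lengths are hypothetical placeholders. Returns
    (coxa_angle, femur_angle, tibia_angle) in radians.
    """
    # Coxa joint: rotate the leg toward the target in the horizontal plane.
    theta_coxa = math.atan2(y, x)

    # Work in the vertical plane of the leg after subtracting the coxa link.
    horiz = math.hypot(x, y) - coxa
    dist = math.hypot(horiz, z)            # distance from femur joint to foot
    if dist > femur + tibia:
        raise ValueError("Target out of reach")

    # Law of cosines: femur angle above the line to the foot, knee interior angle.
    theta_femur = math.atan2(z, horiz) + math.acos(
        (femur**2 + dist**2 - tibia**2) / (2 * femur * dist))
    knee = math.acos((femur**2 + tibia**2 - dist**2) / (2 * femur * tibia))
    theta_tibia = math.pi - knee           # 0 = leg fully extended

    return theta_coxa, theta_femur, theta_tibia

# Example: foot 12 cm out and 8 cm below the coxa joint.
print([round(a, 3) for a in leg_ik(0.12, 0.0, -0.08)])
```

A gait engine would call such a routine for all six legs at every control tick, feeding foot positions generated by the tripod or ripple gait pattern.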
2) Pitch Angle and Roll Angle Adjustment:
The highly configurable body posture, center of gravity, pitch angle, and roll angle enable the hexapod robot to handle all kinds of complicated terrain.
3) Direction, Speed, Height and Stride Adjustment:
JetHexa can turn and change lanes while moving, and supports stepless adjustment of linear velocity, angular velocity, stance, height, and stride.
4) Body Self-balancing:
The built-in IMU sensor detects the body posture in real time so that the robot can adjust its joints to keep the body balanced.
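A hedged sketch of the sensing side of this loop: it reads the IMU over a ROS topic and converts the orientation quaternion to roll and pitch. The "/imu" topic name is an assumption, and the actual balancing controller remains part of JetHexa's own software.

```python
#!/usr/bin/env python3
# Minimal sketch of reading the IMU and extracting roll/pitch for self-balancing logic.
import math
import rospy
from sensor_msgs.msg import Imu
from tf.transformations import euler_from_quaternion

def on_imu(msg):
    q = msg.orientation
    roll, pitch, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    # A balancing controller would feed roll/pitch errors back into the leg joint targets.
    rospy.loginfo_throttle(0.5, "roll=%.1f deg  pitch=%.1f deg",
                           math.degrees(roll), math.degrees(pitch))

if __name__ == "__main__":
    rospy.init_node("imu_posture_monitor")
    rospy.Subscriber("/imu", Imu, on_imu, queue_size=1)
    rospy.spin()
```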
6. Deep Learning and Model Training for AI Creativity
Adopting GoogLeNet, YOLO, MTCNN, and other neural networks, JetHexa supports deep learning and model training. By loading different models, it can quickly recognize targets and implement complex AI projects, including waste sorting, mask identification, emotion recognition, and more.
1) Mask Identification:
With strong computing power, JetHexa's AI functions can be expanded through deep learning, for example to detect whether a face mask is being worn.
2) Waste Sorting:
JetHexa can quickly recognize different waste cards and place them in the corresponding area according to their category.
3) Emotion Recognition:
JetHexa can recognize facial features accurately and catch every nuance of expression.
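As an illustration of the model-loading workflow described above, the hedged sketch below runs inference with torchvision's ImageNet-pretrained GoogLeNet; it stands in for the custom waste/mask/emotion models, whose training data and weights are not part of this listing.

```python
# Minimal sketch of loading a pre-trained classification network for inference.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.googlenet(pretrained=True)   # ImageNet weights as a stand-in
model.eval()

img = Image.open("test.jpg").convert("RGB")  # any sample image
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
    top5 = torch.topk(logits.softmax(dim=1), k=5)
    print("Top-5 class indices:", top5.indices.tolist())
```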
7. MediaPipe Development, Upgraded AI Interaction
Based on the MediaPipe framework, JetHexa can carry out human body tracking, hand detection, pose detection, holistic detection, face detection, 3D detection, and more.
1) Fingertip Trajectory Control
2) Human Posture Control
3) Gesture Recognition
4) 3D Face Detection
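A minimal MediaPipe hand-detection sketch is shown below, the kind of building block the fingertip and gesture functions rely on; capturing from webcam index 0 is an assumption for illustration, since on the robot the frames would come from its camera stream.

```python
# Minimal MediaPipe hand-detection sketch.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
                tip = hand.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
                # Normalized fingertip coordinates; a trajectory drawer would collect these.
                print(f"index fingertip: ({tip.x:.2f}, {tip.y:.2f})")
        cv2.imshow("MediaPipe hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```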
8. Gazebo Simulation
JetHexa employs the ROS framework and supports Gazebo simulation. Gazebo offers a fresh way to control JetHexa and verify algorithms in a simulated environment, which reduces hardware requirements and improves efficiency.
1) Body Control Simulation:
Verify the kinematics algorithm in simulation to avoid damaging the robot through algorithm errors.
2) Visual Data:
Visualized data is provided for observing the robot's leg ends and the trajectory of the center of gravity, helping to optimize the algorithm.
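A small sketch of observing the simulated robot from ROS: it assumes the Gazebo setup publishes sensor_msgs/JointState on "/joint_states", which is the usual ros_control convention; JetHexa's packages may use different names.

```python
#!/usr/bin/env python3
# Minimal sketch for watching the simulated robot's joint angles in Gazebo.
import rospy
from sensor_msgs.msg import JointState

def on_joints(msg):
    # Print the first few joints as a quick readout; a real tool would plot or log them.
    summary = ", ".join("%s=%.2f" % (n, p) for n, p in zip(msg.name[:3], msg.position[:3]))
    rospy.loginfo_throttle(1.0, "first joints: %s", summary)

if __name__ == "__main__":
    rospy.init_node("sim_joint_monitor")
    rospy.Subscriber("/joint_states", JointState, on_joints, queue_size=1)
    rospy.spin()
```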
9. Various Control Methods:
1) WonderAi APP
2) Map Nav APP (Android Only)
3) PC Software
4) Wireless Handle