- Developed for ROS education, features Python support and deep learning with MediaPipe for AI projects like object recognition and voice interaction
- With a 6DOF design and 35KG torque servos, JetArm includes an HD camera for first-person object gripping
- Equipped with a 3D depth camera, JetArm utilizes RGB+D fusion for flexible 3D grabbing and AI applications
- JetArm ultimate kit features a microphone array and speaker for voice-controlled tasks, including navigation and gripping
- Provides multiple control methods, including the WonderAi app (compatible with iOS and Android), a wireless handle, the Robot Operating System (ROS), and a keyboard
Product Description
JetArm is a desktop-level AI vision robotic arm developed by Hiwonder for ROS education scenarios. It pairs a 3D depth camera and 3D vision technology with robotic arm control, and is equipped with high-torque intelligent bus servos, an NVIDIA Jetson Nano main controller, and high-performance hardware such as a 7-inch touch screen, a far-field microphone array, and speakers. With this combination, JetArm can not only perform three-dimensional motion control but also identify, track, and grab target objects in three-dimensional space.
1) Depth Vision, 3D Scene Flexible Grabbing
The end of the JetArm robotic arm is fitted with a high-performance 3D depth camera for target recognition, tracking, and grabbing. Through RGB+D fusion detection, JetArm can also perform flexible grabbing in 3D scenes.
2) All-metal Structure, Bearing Base
The body of the robotic arm has an all-metal structure with an anodized surface, making it exquisite and beautiful. The base uses industrial-grade bearings to handle demanding grabbing projects.
3) Wrapped Structure Design, Beautiful Wiring
JetArm adopts a wrapped structure design, so the servo wiring can be hidden inside the body, keeping the exterior clean and tidy.
4) Circular Microphone Array
The circular microphone array consists of the microphone array itself and a module motherboard. It delivers stronger overall performance, with a sound pickup range of up to 10 m.
1. 3D Depth Vision AI Upgraded Interaction
Equipped with a Gemini Plus 3D depth camera, JetArm can effectively perceive environmental changes, enabling intelligent AI interaction with humans.
1) 3D Depth Point Cloud Recognition
Through the depth camera's API, JetArm can obtain the depth map, color map, and point cloud of the detected environment, and from these derive the RGB data, position coordinates, and depth information of the target item to achieve shape recognition, color sorting, height measurement, material detection, and more.
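As a sketch of how a depth map becomes a point cloud, the standard pinhole back-projection can be written in a few lines of numpy. The intrinsics `fx`, `fy`, `cx`, `cy` below are illustrative placeholders, not the Gemini camera's actual calibration, which would come from the camera's SDK:

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat 4x4 depth image 0.5 m from the camera
depth = np.full((4, 4), 0.5)
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
```

Per-point RGB data can then be attached by sampling the aligned color image at the same pixel coordinates.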
2) Depth Camera Distance Ranging
By capturing an object's depth point cloud data, the depth camera can determine the distance between the object and the camera. This enables accurate object localization, sorting, and tracking.
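One common way to turn a depth region into a distance reading is to take the median depth inside the object's bounding box, which is robust to the hole pixels depth cameras produce. A minimal numpy sketch with synthetic data (not JetArm's actual pipeline):

```python
import numpy as np

def object_distance(depth_m, bbox):
    """Estimate camera-to-object distance as the median depth inside
    the object's bounding box; the median is robust to hole pixels."""
    x0, y0, x1, y1 = bbox
    roi = depth_m[y0:y1, x0:x1]
    valid = roi[roi > 0]          # depth cameras report 0 for missing data
    return float(np.median(valid)) if valid.size else None

depth = np.full((8, 8), 1.2)      # synthetic scene 1.2 m away
depth[3:5, 3:5] = 0.6             # an object at 0.6 m
print(object_distance(depth, (3, 3, 5, 5)))  # → 0.6
```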
3) Regional Target Height Measurement
By obtaining an object's depth point cloud data, the object's height can be identified, enabling projects such as removing objects of abnormal height.
4) Regional Target Volume Measurement
By capturing depth point cloud data, the depth camera can measure the volume of an object after recognizing its shape and height.
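A rough way to estimate volume from a top-down depth map is to sum per-pixel height times per-pixel footprint area. The sketch below assumes a flat table parallel to the image plane and illustrative intrinsics; it is a simplification, not JetArm's actual measurement code:

```python
import numpy as np

def region_volume(depth_m, table_depth, fx, fy):
    """Approximate the volume of objects on a table seen top-down:
    each pixel contributes height * footprint area, where the metric
    footprint of a pixel at depth z is (z / fx) * (z / fy)."""
    height = np.clip(table_depth - depth_m, 0.0, None)   # per-pixel height (m)
    pixel_area = (depth_m / fx) * (depth_m / fy)         # m^2 per pixel
    return float(np.sum(height * pixel_area))

# A 2x2-pixel box 0.1 m tall on a table 1.0 m from the camera
depth = np.full((4, 4), 1.0)
depth[1:3, 1:3] = 0.9
vol = region_volume(depth, table_depth=1.0, fx=600.0, fy=600.0)
```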
2. AI Vision Recognition and Target Tracking
JetArm's 3D depth camera includes an RGB lens. The robotic arm uses OpenCV as its image-processing library, supports AI image recognition, and can implement a variety of intelligent vision projects such as color recognition and tag recognition.
1) Color Sorting
JetArm can recognize and sort color blocks of different colors. In addition to standard colors, JetArm can also recognize a variety of custom colors.
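Colour sorting of this kind is typically done by thresholding in HSV space, where hue separates colours more cleanly than raw RGB. A minimal pure-Python sketch, with hue boundaries that are illustrative rather than JetArm's actual thresholds:

```python
import colorsys

def classify_color(r, g, b):
    """Map an RGB colour (0-255) to a named bin by its HSV hue,
    mirroring the HSV-threshold approach commonly used with OpenCV."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:
        return "black"
    if s < 0.2:
        return "white/grey"      # low saturation: no dominant hue
    hue = h * 360
    if hue < 15 or hue >= 345:
        return "red"
    if hue < 75:
        return "yellow"
    if hue < 165:
        return "green"
    if hue < 255:
        return "blue"
    return "other"

print(classify_color(200, 30, 30))   # → red
```

Custom colours can be supported by adding extra hue/saturation ranges calibrated against sample blocks.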
2) Tag Recognition, Intelligent Stacking
JetArm can recognize different AprilTags and determine the position of the tag block to achieve intelligent stacking.
3) Target Tracking
JetArm can locate and track targets, and machine learning can be used to train JetArm to track additional target items.
3. RGB+D Integration for 3D Spatial Random Grasping
JetArm's 3D depth camera combines RGB and depth data to capture both the object's color and its depth point cloud, enhancing the spatial representation. Using the inverse kinematics algorithm, JetArm can perform advanced AI tasks such as 3D spatial random object grasping, sorting, and transport, enabling more complex and dynamic projects.
1) RGB+D 3D Spatial Random Grasping
2) Voice-Controlled 3D Spatial Grasping and Sorting
4. Upgraded Inverse Kinematics Algorithm
JetArm features a high-level inverse kinematics algorithm that can move the end effector to any coordinate in the 3D scene, and path planning for the robotic arm can also be implemented through Python programming.
1) Target Detection, Joint Adaptive Adjustment
JetArm can detect target items within the recognition area and calculate the position coordinates and placement angle of the target item. Combined with the inverse kinematics algorithm of the robot arm, each joint angle is adaptively adjusted to achieve free grabbing.
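The idea behind computing joint angles from a target coordinate can be illustrated with the classic analytic solution for a planar 2-link arm. This is a deliberate simplification of JetArm's 6DOF solver, with arbitrary link lengths:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar 2-link arm:
    given a target (x, y), return elbow-down joint angles (q1, q2)."""
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(cos_q2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def two_link_fk(q1, q2, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

q1, q2 = two_link_ik(0.15, 0.10, l1=0.12, l2=0.10)
```

A 6DOF arm extends the same principle with more joints and a full pose (position plus orientation) as the target.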
2) 3D Scene Motion Control
JetArm can use inverse kinematics algorithms to achieve linear motion and path planning in 3D scenes.
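Linear end-effector motion is usually produced by interpolating Cartesian waypoints along a straight line and solving IK at each waypoint. A minimal sketch of the interpolation step, with illustrative coordinates:

```python
def linear_path(start, end, steps):
    """Interpolate end-effector waypoints on a straight line; each
    waypoint would then be converted to joint angles by the IK solver."""
    return [tuple(s + (e - s) * i / (steps - 1) for s, e in zip(start, end))
            for i in range(steps)]

# Move from one point to another in 5 evenly spaced steps (metres)
path = linear_path((0.10, 0.00, 0.05), (0.20, 0.10, 0.05), steps=5)
```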
3) Provides Source Code for DH Model and Inverse Kinematics
The kit provides the inverse kinematics analysis, the coordinate DH model, and the inverse kinematics function source code for the JetArm robotic arm; simply input the desired end-effector coordinates, greatly shortening project development time.
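For reference, the standard Denavit-Hartenberg single-joint transform that such a DH model is built from looks like this in numpy. This is the generic textbook form, not JetArm's actual parameter table:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint from standard
    Denavit-Hartenberg parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chaining one such transform per joint yields the end-effector pose
T = dh_transform(np.pi / 2, 0.1, 0.0, 0.0)
```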
5. Deep Learning Model Training
JetArm uses neural networks such as GoogLeNet, YOLO, and MTCNN to perform deep learning on targets and generate trained models.
1) MediaPipe Development, Upgraded AI Interaction
JetArm utilizes the MediaPipe development framework to accomplish various functions, such as human body recognition, fingertip recognition, face detection, and 3D detection.
2) Fingertip Trajectory Control
Based on the detected distance between fingertips, JetArm can perform corresponding actions.
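Fingertip-distance detection of this kind typically normalises the thumb-index distance by the hand size so the gesture works at any distance from the camera. A small pure-Python sketch; the landmark coordinates and threshold are illustrative, not JetArm's implementation:

```python
import math

def pinch_ratio(thumb_tip, index_tip, wrist, index_mcp):
    """Distance between thumb and index fingertips, normalised by
    hand size so the gesture is scale-invariant. Landmarks are
    (x, y) pairs such as MediaPipe hand landmarks."""
    dist = math.dist(thumb_tip, index_tip)
    hand_size = math.dist(wrist, index_mcp)
    return dist / hand_size

def is_pinch(ratio, threshold=0.25):
    """Treat a small normalised fingertip distance as a 'pinch'."""
    return ratio < threshold

# Fingertips close together relative to the hand -> pinch gesture
r = pinch_ratio((0.50, 0.50), (0.52, 0.51), (0.40, 0.80), (0.45, 0.55))
```

A pinch event could then be mapped to a gripper close command, with the ratio driving proportional control.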
3) Waste Sorting
JetArm's kit is equipped with garbage pattern blocks. By loading the corresponding model, JetArm can quickly recognize different garbage and place it in the corresponding classification area.
4) Item Sorting
By training models of daily items and generating the corresponding models, and with the support of the depth camera, JetArm can quickly recognize and grab items by obtaining their depth information.
6. Gazebo Simulation
The JetArm robotic arm is developed on the ROS framework and supports Gazebo simulation. The arm can be controlled and its algorithms verified in a virtual environment, which reduces the requirements on the experimental setup and improves experimental efficiency.
7. Various Control Methods
1) WonderAi App
2) PC Software
3) Wireless Handle
8. JetArm Configuration Selection Guide
The Jetson Nano provides essential AI inference capabilities, perfect for simple neural networks and deep learning models, making it ideal for entry-level AI applications. The Jetson Orin Nano offers 80 times the performance and computing power of the Jetson Nano, supporting more complex deep learning models and accelerated inference, making it well-suited for advanced edge AI applications. The Jetson Orin NX boosts computing power 2.5 times over the Jetson Orin Nano, offering top-tier AI inference performance for larger neural networks and real-time deep learning tasks. This performance upgrade enhances JetArm, unlocking exceptional capabilities in servo control, vision tasks, and deep learning across a wide range of applications!