- Note: This is the JetAuto starter kit equipped with a SLAMTEC A1 Lidar. JetAuto is available in six distinct versions; for the other versions, please refer to the description below.
- Powered by NVIDIA Jetson Nano and based on ROS
- Optional depth camera for 3D vision mapping and navigation
- Optional 7-inch touch screen for parameter monitoring and debugging
- Optional 6-microphone array for voice interaction
Product Description:
JetAuto is an entry-level ROS education robot powered by the Jetson Nano. Featuring a Lidar, a depth camera and a 7-inch screen, JetAuto provides various functionalities, such as robot motion control, mapping and navigation, and human feature recognition.
1) 360° Omnidirectional Movement
With four omnidirectional mecanum wheels, JetAuto can move in any direction. Its multiple movement modes (forward, sideways, diagonal and rotation) and excellent maneuverability allow it to tackle a variety of complicated routes.
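The omnidirectional movement above comes from mecanum wheel kinematics: each wheel's speed is a different combination of the desired forward, sideways and rotational velocities. A minimal sketch of the standard inverse kinematics is below; the wheel radius and wheelbase values are illustrative placeholders, not JetAuto's actual dimensions.

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.1, ly=0.1, r=0.05):
    """Inverse kinematics for a 4-mecanum-wheel base.

    vx: forward velocity (m/s), vy: leftward velocity (m/s),
    wz: rotation rate (rad/s); lx/ly: half wheelbase/half track (m),
    r: wheel radius (m). Returns wheel angular speeds (rad/s) in
    the order front-left, front-right, rear-left, rear-right.
    """
    k = lx + ly
    fl = (vx - vy - k * wz) / r  # front-left
    fr = (vx + vy + k * wz) / r  # front-right
    rl = (vx + vy - k * wz) / r  # rear-left
    rr = (vx - vy + k * wz) / r  # rear-right
    return fl, fr, rl, rr
```

Setting only vy spins the wheels in the crab-walk pattern (front-left and rear-right backward, the other two forward), which is what lets the chassis slide sideways without turning.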
2) Equipped with Lidar & Supports SLAM Mapping Navigation
JetAuto is equipped with lidar, which can realize SLAM mapping and navigation, and supports path planning, fixed-point navigation and dynamic obstacle avoidance.
3) DC Geared Motor
It delivers robust torque, features a high-precision encoder, and includes a protective end shell to ensure an extended service life.
4) 7-inch HD LCD Touch Screen
With a resolution of 1024 x 600 pixels and compatibility with the NVIDIA Jetson Nano, this screen allows you to freely monitor and debug various parameters of the robot.
5) High-precision Pendulum Suspension
JetAuto's high-precision pendulum suspension structure balances the force exerted on all four wheels, enabling good adaptability to uneven surfaces.
6) 6-CH Far-field Microphone Array
The 6-channel microphone array and speakers support sound source positioning, voice recognition control, voice navigation and other functions.
1. Lidar Mapping Navigation
JetAuto is equipped with a lidar that supports path planning, fixed-point navigation, obstacle avoidance during navigation and multi-algorithm mapping, and realizes lidar guarding and lidar tracking functions.
1) Lidar Positioning
Combining Lidar data with its self-developed high-precision encoder and IMU accelerometer sensor data, JetAuto can achieve accurate mapping and navigation.
2) Various 2D Lidar Mapping Methods
JetAuto utilizes various mapping algorithms such as Gmapping, Hector, Karto, and Cartographer. In addition, it supports path planning, fixed-point navigation, and obstacle avoidance during navigation.
3) Multi-point Navigation, TEB Path Planning
JetAuto employs Lidar to detect the surroundings and supports fixed-point navigation, multi-point continuous navigation and other robot applications.
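At the heart of fixed-point navigation is a planner that searches the map for a route from the robot's pose to the goal. As a simplified illustration of the idea (not the TEB planner the robot actually uses), here is a breadth-first search over a 2D occupancy grid:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid.

    grid: list of rows, 0 = free cell, 1 = obstacle.
    start/goal: (row, col) tuples. Returns the shortest list of
    cells from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}  # records each cell's predecessor
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk predecessors back to the start to rebuild the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Real planners like TEB additionally optimize the path for time, clearance and the robot's kinematics, but the grid-search core is the same concept.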
4) RRT Autonomous Exploration Mapping
Adopting the RRT algorithm, JetAuto can complete exploration mapping, save the map and drive back to the starting point autonomously, with no need for manual control.
5) Dynamic Obstacle Avoidance
It supports TEB path planning and is able to monitor obstacles in real time during navigation, replanning its route to avoid them and continue moving.
6) Lidar Tracking
By scanning moving objects in front of it, the Lidar makes the robot capable of target tracking.
2. 3D Vision, Upgraded AI Interaction
JetAuto is equipped with a 3D depth camera, supports 3D vision mapping and navigation, and can obtain 3D point cloud images. Through deep learning, it can realize more AI vision interactive gameplay.
1) 3D Depth Camera
Equipped with an Astra Pro Plus depth camera, JetAuto can effectively perceive environmental changes, allowing for intelligent AI interaction with humans.
2) RTAB-VSLAM 3D Vision Mapping and Navigation
Using the RTAB-SLAM algorithm, JetAuto creates a 3D colored map, enabling navigation and obstacle avoidance in a 3D environment. Furthermore, it supports global localization within the map.
3) ORBSLAM2+ORBSLAM3
ORB-SLAM is an open-source SLAM framework for monocular, stereo and RGB-D cameras, able to compute the camera trajectory in real time and reconstruct the 3D surroundings. In RGB-D mode, the real dimensions of objects can also be acquired.
4) Depth Map Data, Point Cloud
Through the corresponding API, JetAuto can obtain the camera's depth map, color image and point cloud.
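A point cloud is built from a depth map by back-projecting each pixel through the pinhole camera model. The sketch below shows the math for a single pixel; the focal length and principal point values used in the usage note are illustrative placeholders, not the Astra Pro Plus calibration.

```python
def depth_to_point(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project one depth pixel to a 3D point (meters) in the
    camera frame using the pinhole model.

    u, v: pixel column/row; depth_mm: depth value in millimeters;
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    """
    z = depth_mm / 1000.0       # depth cameras usually report mm
    x = (u - cx) * z / fx       # horizontal offset scaled by depth
    y = (v - cy) * z / fy       # vertical offset scaled by depth
    return x, y, z
```

For example, with assumed intrinsics fx = fy = 570 and cx, cy = 320, 240, a pixel at the image center with a 1000 mm reading maps to the point (0, 0, 1.0) one meter straight ahead; looping this over every pixel yields the full point cloud.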
3. Deep Learning, Autonomous Driving
With JetAuto, you can design an autonomous driving scenario to put ROS into practice, which enables you to better understand core functions of autonomous driving.
1) Road Sign Detection
Through its trained deep learning model library, JetAuto can realize autonomous driving with AI vision.
2) Lane Keeping
JetAuto is capable of recognizing the lanes on both sides and maintaining a safe distance from them.
3) Automatic Parking
Combined with a deep learning algorithm, JetAuto can recognize the parking sign and then steer itself into the slot automatically.
4) Turning Decision Making
Based on the lanes, road signs and traffic lights, JetAuto estimates the traffic conditions and decides whether to turn.
4. MediaPipe Development, Upgraded AI Interaction
JetAuto utilizes MediaPipe development framework to accomplish various functions, such as human body recognition, fingertip recognition, face detection, and 3D detection.
1) Fingertip Trajectory Recognition
2) Human Body Recognition
3) 3D Detection
4) 3D Face Detection
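Functions like fingertip trajectory recognition start from the 21 hand landmarks that MediaPipe Hands outputs per frame (landmark 0 is the wrist, 8 the index fingertip). A simplified post-processing heuristic, sketched below on plain coordinate tuples rather than the actual MediaPipe result objects, decides whether the index finger is extended:

```python
# Landmark indices from the MediaPipe Hands model:
# 0 = wrist, 6 = index-finger PIP joint, 8 = index fingertip.
WRIST, INDEX_PIP, INDEX_TIP = 0, 6, 8

def index_finger_extended(landmarks):
    """Heuristic: the index finger counts as extended when its tip
    lies farther from the wrist than its PIP joint does.

    landmarks: list of 21 (x, y) tuples in image coordinates.
    """
    def dist2(a, b):
        # squared distance is enough for a comparison
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    wrist = landmarks[WRIST]
    return dist2(landmarks[INDEX_TIP], wrist) > dist2(landmarks[INDEX_PIP], wrist)
```

Tracking the fingertip landmark's position over successive frames while this check holds gives the drawn trajectory used in fingertip trajectory recognition.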
5. AI Vision Interaction
By incorporating artificial intelligence, JetAuto can implement KCF target tracking, line following, color/tag recognition and tracking, YOLO object recognition and more.
1) KCF Target Tracking:
Relying on KCF filtering algorithm, the robot can track the selected target.
2) Vision Line Following:
JetAuto supports custom color selection, and the robot can identify color lines and follow them.
3) Color/Tag Recognition and Tracking
JetAuto is able to recognize and track a designated color, and can recognize multiple AprilTags and their coordinates at the same time.
4) YOLO Object Recognition
JetAuto utilizes the YOLO network algorithm and a deep learning model library to recognize objects.
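YOLO-style detectors emit many overlapping candidate boxes per object, so a standard post-processing step, non-maximum suppression, keeps only the highest-scoring box in each cluster. This is a generic sketch of that step, not JetAuto's actual detection pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Return indices of boxes kept after greedy NMS, best score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Drop every remaining box that overlaps the kept one too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Two near-duplicate detections of the same object collapse to the single higher-confidence box, while a detection elsewhere in the image survives untouched.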
6. 6CH Far-field Microphone Array
This 6CH far-field microphone array excels at far-field sound source localization, voice recognition and voice interaction. Compared to an ordinary microphone module, it can implement more advanced functions.
1) Sound Source Localization:
Through the 6-microphone array, high-precision, noise-robust sound source localization is achieved. Combined with lidar distance detection, the robot can be summoned from any location.
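Sound source localization works from time differences of arrival: a sound reaches the microphones at slightly different instants, and the delays reveal the direction. A minimal two-microphone, far-field sketch of the idea (far simpler than the 6-channel array's actual algorithm) is:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def direction_from_tdoa(delay_s, mic_spacing_m):
    """Estimate a sound source bearing from the arrival-time
    difference between two microphones (far-field assumption).

    delay_s: time difference of arrival in seconds.
    mic_spacing_m: distance between the microphones in meters.
    Returns the angle in degrees from the broadside direction.
    """
    # Path-length difference implied by the delay, as a fraction
    # of the microphone spacing.
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise
    return math.degrees(math.asin(ratio))
```

Zero delay means the source is straight ahead (broadside); a delay equal to spacing divided by the speed of sound means it lies along the microphone axis. With six microphones, pairwise estimates like this are fused into a full 360-degree bearing.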
2) TTS Voice Broadcast
The text content published by ROS can be directly converted into voice broadcast to facilitate interactive design.
3) Voice Interaction
Speech recognition and TTS voice broadcast are combined to realize voice interaction and support the expansion of iFlytek's online voice conversation function.
4) Voice Navigation
Use voice commands to control Hiwonder to reach any designated location on the map, similar to the voice control scenario of a food delivery robot.
7. Interconnected Formation
Through multi-robot communication and navigation technology, JetAuto can realize multi-vehicle formation performances and artificial intelligence games.
1) Multi-vehicle Navigation
Depending on multi-machine communication, JetAuto can achieve multi-vehicle navigation, path planning and smart obstacle avoidance.
2) Intelligent Formation
A fleet of JetAuto robots can maintain a formation, such as a horizontal line, vertical line or triangle, while moving.
3) Group Control
A group of JetAuto robots can be controlled by a single wireless handle to perform actions uniformly and simultaneously.
8. ROS Robot Operating System
ROS is an open-source meta operating system for robots. It provides basic services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message passing between processes, and package management. It also offers the tools and library functions needed to obtain, compile, write, and run code across computers. It aims to provide code-reuse support for robotics research and development.
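The message passing at the core of ROS follows a publish/subscribe pattern: nodes publish messages to named topics, and every node subscribed to that topic receives them. A minimal in-process illustration of the pattern (plain Python, deliberately not the rospy API) looks like this:

```python
class Topic:
    """Minimal stand-in for a ROS topic: publishers send messages,
    and every subscriber callback receives each message in turn."""

    def __init__(self, name):
        self.name = name
        self._callbacks = []

    def subscribe(self, callback):
        """Register a callback to run on every future message."""
        self._callbacks.append(callback)

    def publish(self, msg):
        """Deliver msg to every registered subscriber."""
        for callback in self._callbacks:
            callback(msg)

# Usage: one node publishes to "/chatter", another listens.
received = []
chatter = Topic("/chatter")
chatter.subscribe(received.append)
chatter.publish("hello jetauto")
```

In real ROS, the publisher and subscriber run in separate processes (often on separate machines) and messages travel over the network, but the decoupling is the same: the publisher never needs to know who is listening.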
9. Gazebo Simulation
JetAuto is built on the Robot Operating System (ROS) and integrates with Gazebo simulation. This enables effortless control of the robot in a simulated environment, facilitating algorithm pre-validation to prevent potential errors. Gazebo provides visual data, allowing you to observe the motion trajectories of each endpoint and center; this visual feedback facilitates algorithm enhancement.
1) Body Simulation Control:
Through robot simulation control, algorithm verification of mapping navigation can be carried out to improve the iteration speed of the algorithm and reduce the cost of trial and error.
2) Rviz Shows URDF Model
An accurate URDF model is provided, and the mapping and navigation results can be observed through the Rviz visualization tool, facilitating debugging and algorithm improvement.
10. Various Control Methods
1) WonderAi APP
2) Map Nav APP (Android Only)
3) Wireless Handle