Hiwonder MentorPi T1 Raspberry Pi Robot Car – Tank Chassis, ROS2 AI Coding Robot with Large AI model ChatGPT, SLAM and Autonomous Driving (Standard Kit With Raspberry Pi 5 4GB)

HiwonderSKU: RM-HIWO-0AH
Manufacturer #: MentorPi T1 Standard Kit With Raspberry Pi 5 4GB

Price  :
Sale price $879.99


Stock  :
In stock (200 units), ready to be shipped


Description

  • 【Raspberry Pi 5 & ROS2 Robot Car】 MentorPi is powered by Raspberry Pi 5, compatible with ROS2, and programmed in Python, making it an ideal platform for AI robot development.
  • 【Multiple Chassis Configurations】 Supports Mecanum-wheel, tank, and Ackermann chassis, allowing flexibility for various applications and meeting diverse user needs.
  • 【Advanced AI Capabilities】Supports SLAM mapping, path planning, multi-robot coordination, vision recognition, target tracking, and more.
  • 【Autonomous Driving with Deep Learning】Using YOLOv5 for road sign and traffic light recognition, helping users explore deep learning-based driving technologies.
  • 【Empowered by Large AI Model, Human-Robot Interaction Redefined】Features multimodal AI with ChatGPT, 3D vision, and voice interaction, enabling advanced embodied AI and natural, context-aware human-robot interaction.

Product Description

MentorPi is a smart robot car powered by Raspberry Pi 5 that supports ROS2. Equipped with a tank chassis, high-speed closed-loop encoder motors, lidar, a 3D depth camera, and large-torque servos, it delivers high-performance capabilities, including SLAM mapping, path planning, vision recognition, and autonomous driving. With YOLOv5 model training, MentorPi can detect road signs and traffic lights. MentorPi also deploys a multimodal large AI model to support more advanced embodied AI applications. To help you unlock its full potential, we offer comprehensive tutorials and videos designed to inspire and support your AI creative projects.

① Aurora930 Pro depth camera

The 3D depth camera not only enables AI visual functions but also supports advanced features like depth image data processing and 3D visual mapping and navigation.

② Raspberry Pi 5 Controller

MentorPi is powered by a Raspberry Pi 5 controller, allowing you to embark on motion control, machine vision, and OpenCV projects.

③ Oradar MS200 Lidar

MentorPi is equipped with lidar, enabling SLAM mapping and navigation, and supports path planning, fixed-point navigation, and dynamic obstacle avoidance.

④ High Performance Encoder Motor

It delivers strong torque, features a high-precision encoder, and includes a protective end shell to ensure an extended service life.

1) Dual-Controller Design for Efficient Collaboration

① Host Controller

- ROS controller (Jetson, Raspberry Pi, etc.)

- AI visual image processing

- Deep neural network

- Human-Machine Voice Interaction

- Advanced AI algorithms

- Simultaneous localization and mapping (SLAM)

② Sub Controller

- ROS expansion board

- High-Frequency PID Control

- Motor Closed-Loop Control

- Servo Control and Feedback

- IMU Data Acquisition

- Power Status Monitoring
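The sub-controller's motor closed-loop control can be illustrated with a minimal PID loop. This is a sketch of the general technique, not Hiwonder's firmware: the class name, gains, and the toy motor model below are all illustrative assumptions.

```python
class SpeedPID:
    """Minimal PID speed controller, illustrating the closed-loop motor
    control the sub-controller performs. Gains are illustrative only."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_speed, measured_speed):
        # Error between the commanded speed and the encoder measurement
        error = target_speed - measured_speed
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Output is a drive command pushing measured speed toward target
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Simulate a crude first-order motor/load model responding to the command
pid = SpeedPID(kp=0.8, ki=2.0, kd=0.01, dt=0.02)
speed = 0.0
for _ in range(300):
    command = pid.update(1.0, speed)
    speed += (command - speed) * 0.2  # speed drifts toward the command
# After the loop, speed has settled close to the 1.0 target
```

On the real robot this loop runs at high frequency on the expansion board, with the encoder providing the measured speed and the output driving the motor PWM.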

1. Integration of Large AI Model with SLAM Mapping and Navigation

MentorPi combines a multimodal large model with SLAM: it understands user voice commands via a large language model, enabling multi-point navigation. Once it arrives at the designated location, it uses a vision language model to gain a deep understanding of the surrounding objects and events. This approach greatly enhances the robot's intelligence, adaptability, and overall user experience, making it better suited to meet real-world needs. The following explanation uses the MentorPi Mecanum-wheel version as an example; its functions are the same on the other chassis versions.

1) Semantic Understanding

MentorPi leverages a large language model to accurately interpret and analyze user voice commands, enabling a deeper understanding of natural language intent.

2) Environmental Perception

Powered by a vision language model, MentorPi can interpret objects in its surroundings and understand the spatial layout of the environment.

3) Intelligent Navigation

MentorPi continuously sends environmental data to the vision language model for real-time in-depth analysis. It dynamically adjusts its navigation path based on user voice commands, allowing it to autonomously navigate to designated areas and deliver intelligent, adaptive routing.
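The multi-point navigation described above can be sketched as a simple pipeline: place names extracted from a voice command are mapped to map coordinates and visited in order. This is purely illustrative logic, not Hiwonder's code; the waypoint names and coordinates are made up, and on the robot the goals would be handed to the ROS2 navigation stack.

```python
# Hypothetical semantic map: place name -> (x, y) goal in meters.
# These names and coordinates are invented for the example.
WAYPOINTS = {
    "kitchen": (2.0, 1.5),
    "door": (0.0, 3.0),
    "charger": (-1.0, 0.5),
}

def plan_route(command_words):
    """Extract known place names from a parsed voice command, preserving
    the order in which the user mentioned them."""
    return [WAYPOINTS[w] for w in command_words if w in WAYPOINTS]

# "go to the kitchen then the door" -> visit kitchen first, then door
route = plan_route("go to the kitchen then the door".split())
```

In the real system the language model does the extraction, so the command need not contain the exact map labels; the ordered goal list is then executed one waypoint at a time.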

4) Scene Understanding

With the support of a vision language model, MentorPi can deeply interpret the semantic information of its environment, including surrounding objects and events within its field of view.

2. Large Model Embodied AI Applications

MentorPi is equipped with a high-performance AI voice interaction module. Unlike conventional AI systems that operate on unidirectional command-response mechanisms, MentorPi leverages ChatGPT to enable a cognitive transition from semantic understanding to physical execution, significantly enhancing the fluidity and naturalness of human-machine interaction. Combined with machine vision, MentorPi exhibits advanced capabilities in perception, reasoning, and autonomous action—paving the way for more sophisticated embodied AI applications.

1) Voice Control

With ChatGPT integration, MentorPi can comprehend spoken commands and carry out corresponding actions, enabling intuitive and seamless voice-controlled interaction.

2) Color Tracking

MentorPi utilizes vision language model analysis to detect and lock onto any object within its field of view. With the integration of a PID algorithm, it achieves precise and real-time target tracking.
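The PID-assisted tracking above boils down to steering from pixel error: how far the detected object sits from the image center determines the turn command. A minimal proportional version is sketched below; the gain, command range, and sign convention are illustrative assumptions, not the robot's tuned values.

```python
def steering_command(target_x, frame_width, kp=0.005, max_turn=1.0):
    """Proportional steering for visual tracking: turn toward the
    detected object's horizontal position in the camera image."""
    error = target_x - frame_width / 2          # pixels off-center
    # Clamp to the allowed angular-velocity range; negative error
    # (object left of center) yields a positive (left) turn
    return max(-max_turn, min(max_turn, -kp * error))
```

A full controller would add integral and derivative terms to damp oscillation, exactly as the product's PID tracking does.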

3) Vision Tracking

With the advanced perception capabilities of a vision language model, MentorPi can intelligently identify and lock onto target objects even in complex environments, allowing it to perform real-time tracking with adaptability and precision.

4) Autonomous Patrolling

Utilizing semantic understanding from a large language model, MentorPi can accurately detect and track lines of various colors in real time while autonomously navigating obstacles, ensuring smooth and efficient patrolling.

3. Multimodal Large Model Deployment


1) Large Language Model

With the integration of the ChatGPT large model, MentorPi operates like a "super brain," capable of comprehending diverse user commands and responding intelligently and contextually.

2) Large Speech Model

With the integration of the AI voice interaction box, MentorPi is equipped with speech input and output capabilities, functionally giving it "ears" and a "mouth." Utilizing advanced end-to-end speech-language models and natural language processing (NLP) technologies, MentorPi can perform real-time speech recognition and generate natural, human-like responses, enabling seamless and intuitive voice-based human-machine interaction.

3) Vision Language Model

MentorPi integrates with OpenRouter's vision large model, enabling advanced image understanding and analysis. It can accurately identify and locate objects within complex visual scenes, while also delivering detailed descriptions that cover object names, characteristics, and other relevant attributes.

4. Tank chassis differential speed operation

The tank chassis is composed of high-quality anti-slip tracks, high-precision encoder motors, and a multi-wheel drive system, enabling MentorPi to achieve precise forward movement and turning in any direction.

5. Lidar Function

MentorPi is equipped with lidar, which supports path planning, fixed-point navigation, obstacle avoidance, and multi-algorithm mapping, and realizes lidar guard and lidar tracking functions.

① Lidar Mapping and Navigation

MentorPi can realize advanced SLAM functions using lidar, including localization, mapping, navigation, path planning, dynamic obstacle avoidance, and lidar tracking and guarding.

② 2D Lidar Mapping Method

The TOF lidar uses the SLAM Toolbox for mapping and supports fixed-point navigation, multi-point navigation, and TEB path planning.

③ Multi-Point Navigation

MentorPi is equipped with a high-accuracy Lidar that provides real-time environmental detection. It supports both fixed-point navigation and multi-point navigation, making it suitable for complex navigation scenarios.

④ Multi-Robot Cooperation Mapping and Navigation

By leveraging multi-robot communication and navigation technology, several robots can collaborate to simultaneously map their surroundings. This enables multi-robot navigation and path planning.

⑤ Dynamic Obstacle Avoidance

Using TOF Lidar, MentorPi can detect obstacles during navigation and intelligently plan its path to effectively avoid them.

⑥ Lidar Tracking and Guarding

MentorPi can work with the lidar to scan for and then track a moving target ahead. It also uses the TOF lidar to scan a secured area; upon detecting an intruder, it automatically turns toward the intruder and activates an alarm.
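The guard behavior reduces to simple scan processing: find the closest valid lidar return and the bearing to turn toward it. The sketch below is an illustrative reconstruction, not Hiwonder's code; the field names mirror a ROS LaserScan message, and the beam layout is invented for the example.

```python
import math

def nearest_bearing(ranges, angle_min, angle_increment, max_range=8.0):
    """Find the closest valid lidar return and the angle toward it.
    Returns (bearing_rad, distance_m), or None if nothing is seen."""
    best_i, best_r = None, max_range
    for i, r in enumerate(ranges):
        if 0.05 < r < best_r:      # skip zero/invalid and far returns
            best_i, best_r = i, r
    if best_i is None:
        return None
    return angle_min + best_i * angle_increment, best_r

# Example: 8 beams spanning the front semicircle, "intruder" at beam 2
bearing, dist = nearest_bearing(
    [3.0, 2.5, 0.8, 2.0, 6.0, 0.0, 4.0, 5.0],
    angle_min=-math.pi / 2,
    angle_increment=math.pi / 7)
```

On the robot, the returned bearing would feed the rotation controller so the chassis turns to face the intruder before triggering the alarm.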

6. 3D Depth Camera Function

① RTAB-VSLAM 3D Vision Mapping & Navigation

Equipped with an Angstrong depth camera, MentorPi can effectively perceive environmental changes, allowing for intelligent AI interaction with humans.

② Depth Map Data, Point Cloud

Through the corresponding API, MentorPi can obtain the camera's depth map, color image, and point cloud.

③ Color Recognition and Tracking

Working with OpenCV, MentorPi can track a specific color. After you select the color in the app, the robot emits light of the corresponding color and moves with an object of that color.
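The core of color tracking is thresholding the camera frame in HSV space and taking the centroid of the matching pixels, which is what OpenCV's `cv2.inRange` plus image moments do in such a pipeline. The sketch below reimplements that step in pure NumPy so it runs standalone; the color ranges and tiny synthetic frame are illustrative.

```python
import numpy as np

def color_centroid(hsv_image, lower, upper):
    """Return the (x, y) centroid of pixels whose HSV values fall inside
    [lower, upper], or None if no pixel matches."""
    mask = np.all((hsv_image >= lower) & (hsv_image <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Tiny synthetic HSV frame: a "red" patch (hue ~0) on a green background
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :] = (60, 255, 255)        # green background (hue 60)
frame[0:2, 0:2] = (0, 255, 255)     # red patch in the top-left corner
center = color_centroid(frame, lower=(0, 100, 100), upper=(10, 255, 255))
```

The centroid then drives the tracking controller: the robot steers so this point stays in the middle of the frame.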

④ Target Tracking

Through visual positioning, the target object can be accurately located and tracked.

⑤ QR Code Recognition

MentorPi can recognize the content of custom QR codes and display the decoded information.

⑥ Vision Line Tracking

MentorPi supports custom color selection, and the robot can identify color lines and track them.

7. Deep Learning, Autonomous Driving

In the ROS system, MentorPi deploys the PyTorch deep learning framework, the open-source image-processing library OpenCV, and the YOLOv5 object detection algorithm, letting users who want to explore autonomous-driving vision technology get started with AI autonomous driving easily.

① Road Sign Detection

By training models from the deep learning model library, MentorPi can realize autonomous driving functions with AI vision.

② Lane Keeping

MentorPi can recognize the lane markings on both sides and maintain a safe distance from them.

③ Autonomous Parking

Combined with deep learning algorithms that simulate real scenarios, side parking and garage parking can be achieved.

④ Turning Decision Making

Based on the lanes, road signs, and traffic lights, MentorPi estimates the traffic situation and decides whether to turn.

⑤ YOLO Object Recognition

MentorPi uses the YOLO network algorithm and a deep learning model library to recognize objects.
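After the YOLO network emits candidate boxes, the driving logic typically keeps only confident detections of the classes it cares about. The post-processing sketch below is illustrative, not the robot's code; the class labels and the detection tuple layout `(class_name, confidence, (x1, y1, x2, y2))` are assumptions for the example.

```python
def filter_detections(detections, conf_threshold=0.5,
                      classes=("stop_sign", "traffic_light")):
    """Drop low-confidence boxes and classes the driving logic ignores.
    Each detection is (class_name, confidence, (x1, y1, x2, y2))."""
    return [d for d in detections
            if d[1] >= conf_threshold and d[0] in classes]

# Hypothetical raw YOLO output for one camera frame
raw = [("stop_sign", 0.92, (10, 20, 60, 80)),
       ("person", 0.88, (100, 30, 140, 120)),        # not a driving class
       ("traffic_light", 0.41, (200, 10, 215, 50))]  # too low confidence
kept = filter_detections(raw)
```

The surviving boxes then feed the behavior layer, e.g. braking for a stop sign or checking the traffic-light state before a turn.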

⑥ MediaPipe Development, Upgraded Al Interaction

MentorPi utilizes the MediaPipe development framework to accomplish various functions, such as fingertip recognition, human-body recognition, 3D detection, and 3D face detection.

8. Open Source Python Programming

MentorPi supports Python programming. All AI Python code is open source, with detailed annotations for easy self-study.

9. Wireless Handle Control

MentorPi supports a wireless gamepad, which connects via Bluetooth to control the robot in real time.

10. App Control

The WonderPi app supports Android and iOS. Switch game modes easily and quickly to experience various AI games.

1* T1 (Track) Chassis (Assembled; Battery included)

1* Bracket set

1* Raspberry Pi 5 (4GB)

1* 64GB SD card

1* Cooling fan

1* Raspberry Pi power supply cable

1* RRC Lite controller + RRC data cable

1* Battery cable

1* Lidar + 4PIN wire

1* 8.4V 2A charger (DC5.5*2.5 male connector)

1* Aurora930 Pro depth camera

1* USB to Type-C data cable

1* Wireless controller

1* Controller receiver

1* EVA ball (40mm)

1* Card reader

1* 3PIN wire (100mm)

1* Screw bag

1* Screwdriver

1* User manual


Model: MentorPi T1(Depth camera version)

Chassis type: Tank chassis

Size: 27.8*19.5*18.2cm

Weight: 1.88kg

Motor: Hall encoder DC geared motor

Encoder: AB-phase incremental Hall encoder

Material: Full metal aluminum alloy chassis, anodizing process

ROS controller: RRC Lite controller + Raspberry Pi 5 controller

Camera: Aurora930 Pro 3D depth camera

Lidar: Oradar MS200

Battery: 7.4V 2200mAh 10C LiPo battery with protection board (Continuous operating time: up to 60 minutes)

OS: Raspberry Pi OS + Ubuntu 22.04 LTS + ROS2 Humble (Docker)

Software: iOS/ Android app

Communication method: WiFi/ Ethernet

Programming language: Python/ C/ C++/ JavaScript

Storage: 64GB TF card

Servo: LFD-01 anti-stall servo (monocular camera version)

Materials: Development tutorials; video tutorials; ROS source code, system image and software

Package size & weight: around 3.2kg; 39.7*24.4*22cm
