- TurboPi Advanced Kit runs on ROS2 and leverages Python and OpenCV to deliver efficient AI processing and a wide range of robotic applications.
- Integrating a multimodal AI model and voice interaction, TurboPi enables smart conversations, environmental awareness, and flexible task execution.
- Equipped with a 2-DOF HD camera, TurboPi offers FPV video feedback, object and color recognition, line following, and autonomous driving features.
- Featuring a robust metal chassis and Mecanum wheels, TurboPi can move in any direction and rotate on the spot, adapting smoothly to various scenarios.
- Full Python source code, diverse experiment examples, and detailed course materials help you master AI and programming while inspiring endless innovation.
Product Description
TurboPi is an open-source AI vision car designed for beginners, powered by the Raspberry Pi. It features a Mecanum-wheel chassis for omnidirectional movement, a 2-DOF HD wide-angle camera, and supports Python programming with OpenCV and YOLOv5 for image processing and object detection. TurboPi enables a range of intelligent functions such as color recognition, object tracking, and autonomous driving.
1) 360° Omnidirectional Movement
With four omnidirectional mecanum wheels, TurboPi can move in any direction. Its movement modes (forward, sideways, diagonal, and in-place rotation) and excellent performance let it tackle a variety of complicated routes.
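The omnidirectional motion described above follows from standard mecanum-wheel kinematics. The sketch below is illustrative only: the wheel ordering, sign conventions, and geometry factor `k` are assumptions, not taken from TurboPi's actual firmware.

```python
def mecanum_wheel_speeds(vx, vy, wz, k=1.0):
    """Map a desired body velocity to four wheel speeds.

    vx: forward speed, vy: leftward (strafe) speed,
    wz: counter-clockwise rotation rate,
    k:  geometry factor (half track width + half wheelbase).
    Returns speeds for (front-left, front-right, rear-left, rear-right).
    """
    fl = vx - vy - k * wz
    fr = vx + vy + k * wz
    rl = vx + vy - k * wz
    rr = vx - vy + k * wz
    return fl, fr, rl, rr

# Driving straight forward: all four wheels spin at the same speed.
print(mecanum_wheel_speeds(1.0, 0.0, 0.0))  # (1.0, 1.0, 1.0, 1.0)

# Pure sideways strafe: diagonal wheel pairs spin in opposite directions,
# which is what the angled rollers on mecanum wheels make possible.
print(mecanum_wheel_speeds(0.0, 1.0, 0.0))  # (-1.0, 1.0, 1.0, -1.0)
```

Mixing all three inputs lets the car translate and rotate at the same time, which is how it follows curved paths while keeping the camera pointed at a target.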
2) 2DOF Pan-tilt for 360° View
Equipped with two anti-blocking servos, the pan-tilt can rotate 130° vertically and 180° horizontally, which, combined with the rotating chassis, gives the camera a full 360° field of view.
1. AI Vision Recognition, Target Tracking
TurboPi uses OpenCV as its image-processing library to identify target objects and complete AI games such as face tracking, color tracking, QR code recognition, gesture recognition, object recognition, and line following.
1) First Person View, HD Transmitted Image
TurboPi supports LAN and Wi-Fi direct connection modes. After the Wi-Fi connection is established, the first-person view is transmitted to the app interface, bringing you a more exciting and realistic robot control experience!
2) Color Recognition and Tracking
Working with OpenCV, TurboPi can track a specific color. After you select the color in the app, the car lights up in the corresponding color and moves with objects of that color.
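As a rough illustration of how such color tracking works, the sketch below reimplements the core idea in plain NumPy (a simplified stand-in for OpenCV's `inRange` and `moments`); the color thresholds and the synthetic test image are made up for the demo:

```python
import numpy as np

def color_centroid(img, lo, hi):
    """Return the (row, col) centroid of pixels inside the [lo, hi]
    per-channel color range, or None if no pixel matches.
    img is an HxWx3 uint8 array."""
    mask = np.all((img >= lo) & (img <= hi), axis=2)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 10x10 image: black background with a red 2x2 patch
# at rows 4-5, columns 6-7.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[4:6, 6:8] = (200, 30, 30)

red_lo, red_hi = (150, 0, 0), (255, 80, 80)
print(color_centroid(img, red_lo, red_hi))  # (4.5, 6.5)
```

Once the centroid is known, the tracking loop steers the pan-tilt (or the car body) to keep it near the image center. In practice OpenCV does the thresholding in HSV space, which is far more robust to lighting changes than raw RGB.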
3) Target Tracking, Different Tracking Modes
TurboPi tracks the target within its view with OpenCV. It supports two tracking modes: either the pan-tilt alone, or the pan-tilt together with the car body, moves with the target.
4) Face Detection and Tracking
TurboPi can track a human face within its field of view and move with it.
5) QR Code Control
TurboPi can recognize and read QR codes to perform related actions.
6) Gesture Recognition, Man-robot Interaction
TurboPi works with OpenCV to count fingers, then interacts with you based on the number of fingers shown, for example by honking the horn, twisting, or changing the color of its lights.
7) Vision Line Following
To follow a line, TurboPi first extracts a region of interest (ROI) with OpenCV, then removes noise to locate the line in the binary image, and finally uses a PID algorithm to correct its heading.
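The three steps above (ROI, binarize and locate, PID correction) can be sketched as follows. This is a hedged, NumPy-only illustration with made-up gains, not TurboPi's actual source code:

```python
import numpy as np

def line_offset(binary_roi):
    """Offset of the line centre from the image centre, in pixels.
    binary_roi: 2-D array, nonzero where the line was detected."""
    ys, xs = np.nonzero(binary_roi)
    if len(xs) == 0:
        return 0.0                      # no line found: keep heading straight
    return float(xs.mean()) - binary_roi.shape[1] / 2

class PID:
    def __init__(self, kp=0.5, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt=0.05):
        """One control step; returns the steering correction."""
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Synthetic ROI: a vertical line at columns 10-12 of a 64-pixel-wide strip.
roi = np.zeros((8, 64), dtype=np.uint8)
roi[:, 10:13] = 255

err = line_offset(roi)     # line centre 11.0, image centre 32.0 -> -21.0
steer = PID().update(err)  # negative correction: steer toward the line
print(err, steer)
```

In the real pipeline the ROI would come from the bottom rows of the camera frame and the binary image from an HSV threshold; the PID output is then mapped onto the mecanum drive so the car stays centred on the line.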
2. Large Model Embodied AI Applications
The Advanced Kit is equipped with a high-performance AI voice interaction module. Unlike conventional AI systems that operate on unidirectional command-response mechanisms, TurboPi leverages ChatGPT to enable a cognitive transition from semantic understanding to physical execution, significantly enhancing the fluidity and naturalness of human-machine interaction. Combined with machine vision, TurboPi exhibits advanced capabilities in perception, reasoning, and autonomous action—paving the way for more sophisticated embodied AI applications.
1) Vision Tracking
With the advanced perception capabilities of a vision language model, TurboPi can intelligently identify and lock onto target objects even in complex environments, allowing it to perform real-time tracking with adaptability and precision.
2) Voice Control
Powered by ChatGPT, TurboPi is capable of semantic understanding and executing corresponding actions, enabling smooth and natural voice control.
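A common pattern behind this kind of voice control is to have the language model return a structured action that the robot then dispatches. The sketch below assumes, purely for illustration, that the model replies with JSON such as `{"action": "forward", "speed": 0.5}`; the action names and dispatch table are invented, not TurboPi's actual protocol:

```python
import json

# Hypothetical action handlers; the real ones would drive the motors.
def drive_forward(speed):
    return f"driving forward at {speed}"

def stop():
    return "stopping"

DISPATCH = {
    "forward": lambda cmd: drive_forward(cmd.get("speed", 0.3)),
    "stop": lambda cmd: stop(),
}

def handle_model_reply(reply_text):
    """Parse the model's JSON reply and run the matching handler."""
    try:
        cmd = json.loads(reply_text)
    except json.JSONDecodeError:
        return "unrecognized command"
    handler = DISPATCH.get(cmd.get("action"))
    if handler is None:
        return "unrecognized command"
    return handler(cmd)

print(handle_model_reply('{"action": "forward", "speed": 0.5}'))
print(handle_model_reply("not json"))
```

Keeping the model's output structured like this is what turns free-form speech into something a motion controller can safely execute: anything the parser does not recognize is simply ignored.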
3) Autonomous Patrolling
Utilizing semantic understanding from a large language model, TurboPi can accurately detect and track lines of various colors in real time while autonomously navigating obstacles, ensuring smooth and efficient patrolling.
4) Scene Understanding
Leveraging OpenAI's ChatGPT model, TurboPi is capable of understanding user commands and performing semantic analysis of visual scenes within its field of view. It can interpret image content and features, delivering contextual feedback via both text and speech.
5) Smart Home Assistant
Leveraging the multimodal model deployed on its body, TurboPi is capable of recognizing and analyzing objects within its field of view. Combined with ChatGPT, it can understand user commands and execute corresponding actions and responses.
6) Smart Obstacle Avoidance
TurboPi uses an ultrasonic sensor to detect obstacles in real time and leverages a large language model to interpret commands with semantic understanding, enabling it to respond with context-aware actions.
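A minimal sketch of the reactive part of such obstacle avoidance is shown below; the threshold values and action names are assumptions, and hysteresis (two thresholds instead of one) is added so the robot does not oscillate right at the boundary:

```python
def avoid_step(distance_cm, currently_avoiding, near=20.0, clear=30.0):
    """Decide the next action from one ultrasonic reading.

    Start turning when closer than `near`; resume forward motion
    only once farther than `clear` (hysteresis).
    Returns (action, avoiding_flag) for the next iteration.
    """
    if currently_avoiding:
        if distance_cm > clear:
            return "forward", False
        return "turn", True
    if distance_cm < near:
        return "turn", True
    return "forward", False

action, avoiding = avoid_step(15.0, False)
print(action)  # "turn": obstacle closer than 20 cm
action, avoiding = avoid_step(25.0, avoiding)
print(action)  # still "turn": 25 cm has not yet passed the 30 cm clear threshold
```

The language-model layer the text describes would sit above this loop, e.g. deciding *which way* to turn from a spoken command, while the ultrasonic check keeps the low-level reaction fast and deterministic.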
3. Multimodal Models Deployment
TurboPi integrates a multimodal large AI model and supports online deployment via OpenAI's API, enabling real-time access to advanced AI capabilities. It also allows seamless switching to alternative models, such as those available through OpenRouter, to support Vision Language Model applications. At its core, TurboPi is designed as an all-in-one interaction hub built around ChatGPT, enabling sophisticated embodied AI use cases and creating a smooth, intuitive human-machine interaction experience!
1) Large Language Model
With the integration of the ChatGPT large model, TurboPi operates like a "super brain," capable of comprehending diverse user commands and responding intelligently and contextually.
2) Large Speech Model
With the integration of the AI voice interaction box, TurboPi gains speech input and output capabilities, functionally giving it 'ears' and a 'mouth.' Utilizing advanced end-to-end speech-language models and natural language processing (NLP) technologies, TurboPi can perform real-time speech recognition and generate natural, human-like responses, enabling seamless and intuitive voice-based human-machine interaction.
3) Vision Language Model
TurboPi integrates with OpenRouter's Vision Large Model, enabling advanced image understanding and analysis. It can accurately identify and locate objects within complex visual scenes, while also delivering detailed descriptions that cover object names, characteristics, and other relevant attributes.
4. YOLO Deep Learning, Unlock the World of AI Autonomous Driving
Powered by the YOLOv5 deep learning framework and the OpenCV image processing library, TurboPi supports real-time image analysis and multi-object detection. It offers a practical introduction to key technologies in autonomous driving, making it easier to explore AI vision and machine learning in creative, hands-on projects.
1) Autonomous Line Following
TurboPi features a line follower with four high-precision infrared probes, allowing it to accurately detect paths and navigate smoothly along designated routes.
2) Road Sign Recognition
TurboPi leverages a high-definition wide-angle camera and the YOLOv5 deep learning framework to identify road signs and respond with appropriate turning maneuvers—enabling more intelligent and interactive navigation.
3) Traffic Light Recognition
TurboPi leverages the OpenCV vision library to identify traffic light colors within its field of view, enabling it to respond by moving forward or stopping as appropriate.
5. 4-CH Line Following
TurboPi is equipped with a 4-CH line follower consisting of four high-precision infrared detectors. This setup enables accurate line following, curve detection, and intersection recognition for smooth and intelligent navigation.
1) High-Precision 4-CH Line Tracking
TurboPi's 4-CH line follower utilizes a high-density infrared array, allowing it to accurately detect lines from as narrow as 0.5 cm up to 6 cm wide. This versatility makes it well-suited for a wide range of line-following environments and applications.
2) Enhanced Line Following for Complex Paths
Equipped with a 4-CH line follower, TurboPi can reliably detect and navigate complex intersections like right angles, T-junctions, and crossroads, making it well-prepared to tackle advanced line-following challenges.
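The discrete logic behind a 4-channel follower can be sketched as a lookup from the sensor pattern to a steering action. The patterns and action names below are illustrative assumptions (1 = that sensor is over the line), not TurboPi's actual firmware table:

```python
def steer_from_sensors(s):
    """s: tuple of 4 ints (leftmost..rightmost), 1 = line detected."""
    if s == (0, 1, 1, 0):
        return "forward"       # line centred under the middle sensors
    if s in ((1, 1, 0, 0), (1, 0, 0, 0)):
        return "turn_left"     # line drifting toward the left side
    if s in ((0, 0, 1, 1), (0, 0, 0, 1)):
        return "turn_right"    # line drifting toward the right side
    if s == (1, 1, 1, 1):
        return "intersection"  # all sensors on the line: crossroad or T-junction
    return "search"            # line lost: rotate slowly to reacquire it

print(steer_from_sensors((0, 1, 1, 0)))  # forward
print(steer_from_sensors((1, 1, 1, 1)))  # intersection
```

Having four sensors instead of two is exactly what makes the `intersection` case distinguishable from a simple curve, which is why the kit can handle right angles, T-junctions, and crossroads.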
6. Hardware Features
1) Hard Aluminium Alloy Chassis
It protects the core control board from impacts and vibration and can bear a heavier load!
2) High-Performance Pan-Tilt Servo
The 2-DOF pan-tilt is equipped with micro anti-blocking servos that offer higher accuracy and a longer service life.
3) HD Wide Angle Camera
Loaded with an HD wide-angle camera, TurboPi can look 180° side to side and 130° up and down. Combined with the mecanum chassis, its visual range extends to a full 360°.
4) 4-channel Line Follower
The 4-channel line follower with an I2C interface enables TurboPi to detect black lines without occupying the Raspberry Pi CPU.
5) Powered by Raspberry Pi
TurboPi is powered by a Raspberry Pi 5 or 4B controller, allowing you to embark on motion control, machine vision, and OpenCV projects.
6) Easy to build
TurboPi is easy to assemble by following the instructions, so you can enjoy the fun of building your own robot.
7) APP Control
WonderPi APP supports Android and iOS. Switch game modes easily and quickly to experience various AI games.
8) Unlimited Creativity
TurboPi can be expanded with various electronic modules and LEGO bricks, opening up a wide range of interesting robot games.