Hiwonder PuppyPi Pro ROS Quadruped Robot, Integrated with AI Large Model (ChatGPT), Supports AI Vision, Voice Interaction, LiDAR, and Robotic Arm Attachment (Pro Kit/ Raspberry Pi 4B 4GB)

Hiwonder SKU: RM-HIWO-033
Manufacturer #: PuppyPi Pro with Raspberry Pi 4B 4GB

Price  :
Sale price $1,289.99

Shipping calculated at checkout.

Stock  :
In stock (119 units), ready to be shipped


Description

  • PuppyPi runs on Raspberry Pi 4B/5 and supports ROS1/ROS2. With Python and open-source code, it enables powerful AI computing and flexible robotics development.
  • With ChatGPT and AI vision, PuppyPi delivers intelligent perception, reasoning, and natural human-robot interaction using a multimodal model.
  • Equipped with 8 high-torque servos and a link-structure leg design, PuppyPi uses inverse kinematics for fast, stable, and precise multi-joint movement.
  • HD camera enables color recognition, face detection, tracking, ball kicking, line following, and MediaPipe-based gesture control for interactive AI tasks.
  • Supports TOF Lidar and robotic arm expansion for SLAM, obstacle avoidance, and precise object grasping—ideal for advanced AI exploration.

Product Description

PuppyPi is an AI-powered quadruped robot dog based on the Raspberry Pi 4B/5. It is equipped with 8 high-performance coreless servos and features a link-structure design in its legs, enabling flexible and dynamic movement. Supporting both ROS1 and ROS2, PuppyPi can be programmed in Python and integrates AI vision capabilities, such as target tracking and face detection, to support a variety of creative AI applications. With its multimodal AI model, camera, and AI voice interaction module, PuppyPi can understand its surroundings and perform tasks with agility, enabling advanced embodied AI applications.

1) 8DOF AI Quadruped Robot

PuppyPi features 8 degrees of freedom (DOF) and is equipped with high-performance coreless servos, delivering precise and flexible motion control to execute complex actions with accuracy.

2) Powered by Raspberry Pi

PuppyPi is powered by the Raspberry Pi control system and equipped with a 2DOF HD wide-angle camera. Combined with OpenCV image processing and Python programming, it enables the implementation of a wide range of AI projects.

1. Enhanced Linkage Design & Intelligent Gait Control

PuppyPi features a precision-engineered linkage structure that improves joint transmission efficiency and extends its range of motion. Integrated with an intelligent gait control algorithm, it allows fine-tuned adjustment of ground contact, lift-off timing, and coordination between front and rear legs. PuppyPi supports multiple gait modes, such as Walk, Trot, and Amble, ensuring smooth transitions and adaptability to various intelligent scenarios.
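To illustrate how a trot alternates diagonal leg pairs, here is a minimal gait-phase sketch in Python; the leg labels and the even 50/50 stance split are assumptions for demonstration, not PuppyPi's actual gait engine.

```python
# Two-beat trot sketch: diagonal leg pairs (FL+RR and FR+RL) alternate
# between stance (ground contact) and swing. The leg labels and the even
# 50/50 split are illustrative assumptions.

def trot_stance(phase: float) -> set:
    """Return the legs in ground contact at a normalized gait phase."""
    phase %= 1.0
    # First half-cycle: front-left + rear-right on the ground;
    # second half-cycle: front-right + rear-left.
    return {"FL", "RR"} if phase < 0.5 else {"FR", "RL"}

# Step through one gait cycle in quarters:
for p in (0.0, 0.25, 0.5, 0.75):
    print(p, sorted(trot_stance(p)))
```

Other gaits such as Walk and Amble differ only in the phase offsets and duty factors assigned to each leg, so the same phase-based scheduling idea generalizes.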

1) Linkage Mechanism, Efficient Movement

PuppyPi's legs employ a linkage mechanism to increase the angular velocity of its lower legs. The independent movement of different parts of the leg contributes to a greater rotation range.

2) Inverse Kinematics & Linkage-Based Motion

PuppyPi includes intuitive visual motion editing software that lets users define the end-point positions of each leg. Using inverse kinematics, the system automatically calculates the required servo angles, making it easy to create, adjust, and experiment with a variety of custom movements and behaviors.
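As a rough sketch of the underlying math, the following snippet solves a planar two-link leg with the law of cosines; the link lengths and axis conventions are placeholder assumptions, not PuppyPi's real geometry or API.

```python
import math

# Minimal two-link planar leg IK sketch: given a foot target (x, z) in the
# hip frame (z pointing up, foot hanging below the hip), solve the hip and
# knee angles with the law of cosines. Link lengths are placeholder values.

L1, L2 = 0.06, 0.06  # upper and lower leg lengths in meters (assumed)

def leg_ik(x: float, z: float):
    """Return (hip, knee) angles in radians for a foot target (x, z)."""
    d2 = x * x + z * z
    d = math.sqrt(d2)
    if d > L1 + L2 or d < abs(L1 - L2):
        raise ValueError("target out of reach")
    # Knee angle from the law of cosines (0 = leg fully extended).
    cos_knee = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle = direction to the target minus the interior triangle angle.
    alpha = math.atan2(x, -z)  # -z because the foot hangs below the hip
    beta = math.atan2(L2 * math.sin(knee), L1 + L2 * math.cos(knee))
    return alpha - beta, knee

hip, knee = leg_ik(0.0, -0.10)  # foot 10 cm straight below the hip
print(round(math.degrees(knee), 1))
```

The motion editor's job is essentially to call such a solver once per leg per keyframe, then stream the resulting servo angles to the controller.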

3) Real-Time Adjustment of Speed, Height & Body Posture

PuppyPi allows dynamic control of walking speed, height, and body orientation while in motion. Pitch and roll angles can be smoothly adjusted on the fly, enabling the robot to walk, turn, and shift posture simultaneously for more natural and flexible movement.

2. AI Vision, Unlimited Creativity

PuppyPi is equipped with an HD wide-angle camera on its head, enabling real-time image acquisition and processing using the OpenCV vision library. It can detect and extract parameters such as the color and position of target objects within its field of view. The system supports a range of vision-based functions, including video streaming, color recognition, tag identification, and visual line following. By applying a PID control algorithm, PuppyPi achieves real-time target locking, enabling advanced AI applications such as target tracking and autonomous ball kicking.

1) Object Tracking

Powered by the OpenCV vision library, PuppyPi can detect and locate objects of a specific color in real-time. Using PID control, its head can actively track moving targets with precision.
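To show the control idea in isolation, here is a toy PID loop that pans a head toward a target's pixel position; the gains, the 320 px image center, and the simplified linear "camera model" are all assumptions for illustration (only the proportional term is tuned in this run).

```python
# Toy PID loop for head tracking: drive the pan angle so that a target's
# pixel x-coordinate converges to the image center. Gains, the 320 px
# center, and the linear camera model are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.005, ki=0.0, kd=0.0)  # P-only for this toy run
pan, target_px, center = 0.0, 400.0, 320.0
for _ in range(50):
    # A real system would re-detect the target each frame; here a crude
    # linear model maps the pan angle back to a pixel error.
    error = (target_px - center) - pan * 100.0
    pan += pid.update(error, dt=0.02)
print(round(pan, 3))  # → 0.8  (pixel error driven to zero)
```

In the real pipeline, `error` would come from OpenCV color detection on each camera frame rather than from the stand-in model above.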

2) Going Up and Down Stairs

PuppyPi leverages the OpenCV vision library to precisely detect the spatial position and geometric contours of stairs within its view. With autonomous decision-making, it can efficiently and steadily climb stairs on its own.

3) Tag Recognition

Using OpenCV algorithms, PuppyPi can recognize and interpret tag codes within its field of view. It can also calculate each tag's position and orientation, allowing users to program customized interactive movements.

4) Face Detection

PuppyPi features a built-in MediaPipe deep learning algorithm that works with a high-definition camera to accurately detect and lock onto human faces. Users can program PuppyPi to perform responsive actions based on facial detection.

5) Autonomous Hurdling

Using the OpenCV vision library, PuppyPi can detect obstacles in real time while following a path, capture their coordinates, and dynamically adjust its posture. It then autonomously makes decisions to smoothly navigate and overcome obstacles.

6) Line Following

With AI vision and PID motion control, PuppyPi can identify colored lines in its view and autonomously adjust its gait to follow the path smoothly and stably.
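A minimal sketch of the steering step, assuming the camera image has already been thresholded to a binary mask: find the line's centroid in one pixel row and turn proportionally to its offset from the image center. The gain value is an assumed tuning constant.

```python
# Line-following sketch: given one binary row of a thresholded camera image
# (1 = line pixel), find the line's centroid and steer proportionally to its
# offset from the image center. The gain is an assumed tuning constant.

def steering_from_row(row, gain=0.01):
    """Return a turn command from one binary pixel row; 0.0 if no line seen."""
    line_pixels = [i for i, v in enumerate(row) if v]
    if not line_pixels:
        return 0.0                        # line lost: go straight (or search)
    centroid = sum(line_pixels) / len(line_pixels)
    center = (len(row) - 1) / 2
    return gain * (centroid - center)     # positive = line is to the right

# A 9-pixel row with the line shifted right of center:
print(round(steering_from_row([0, 0, 0, 0, 0, 0, 1, 1, 1]), 4))  # → 0.03
```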

3. Multimodal Models Deployment

PuppyPi integrates a Multimodal Large AI Model and supports online deployment via OpenAI's API, enabling real-time access to advanced AI capabilities. It also allows seamless switching to alternative models, such as those available through OpenRouter, to support Vision Language Model applications. At its core, PuppyPi is designed as an all-in-one interaction hub built around ChatGPT, enabling sophisticated embodied AI use cases and creating a smooth, intuitive human-machine interaction experience!

1) Large Language Model

With the integration of the ChatGPT Large Model, PuppyPi operates like a "super brain", capable of comprehending diverse user commands and responding intelligently and contextually.
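As a hypothetical sketch of how a spoken command might become a chat-completions request that asks the model to choose one of the robot's action primitives: the action names, system prompt, and model id below are illustrative assumptions, not PuppyPi's actual integration.

```python
# Hypothetical command-to-action sketch: build a chat-completions payload
# that asks the model to pick exactly one robot action primitive. Action
# names, prompt wording, and model id are assumptions for illustration.

ACTIONS = ["sit", "stand", "trot_forward", "turn_left", "turn_right", "kick_ball"]

def build_request(user_command: str) -> dict:
    """Build a chat-completions payload that maps a command to one action."""
    return {
        "model": "gpt-4o-mini",  # placeholder model id
        "messages": [
            {"role": "system",
             "content": "You control a quadruped robot. Reply with exactly one "
                        "action from this list: " + ", ".join(ACTIONS)},
            {"role": "user", "content": user_command},
        ],
    }

req = build_request("Walk to the red ball and kick it")
# The payload would then be sent with the OpenAI Python SDK, e.g.:
#   client.chat.completions.create(**req)
print(req["messages"][1]["content"])  # → Walk to the red ball and kick it
```

Constraining the reply to a fixed action list keeps the model's output trivially parseable before it is handed to the motion controller.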

2) Large Speech Model

With the integration of the AI voice interaction box, PuppyPi is equipped with speech input and output capabilities, functionally giving it 'ears' and a 'mouth'. Utilizing advanced end-to-end speech-language models and natural language processing (NLP) technologies, PuppyPi can perform real-time speech recognition and generate natural, human-like responses, enabling seamless and intuitive voice-based human-machine interaction.

3) Vision Language Model

PuppyPi integrates with OpenRouter's Vision Large Model, enabling advanced image understanding and analysis. It can accurately identify and locate objects within complex visual scenes, while also delivering detailed descriptions that cover object names, characteristics, and other relevant attributes.

4. Large Model Embodied AI Applications

PuppyPi is equipped with a high-performance AI voice interaction module. Unlike conventional AI systems that operate on unidirectional command-response mechanisms, PuppyPi leverages ChatGPT to enable a cognitive transition from semantic understanding to physical execution, significantly enhancing the fluidity and naturalness of human-machine interaction. Combined with machine vision, PuppyPi exhibits advanced capabilities in perception, reasoning, and autonomous action—paving the way for more sophisticated embodied AI applications.

1) Voice Control

Powered by ChatGPT, PuppyPi is capable of semantic understanding and executing corresponding actions, enabling smooth and natural voice control.

2) Scene Understanding

Leveraging OpenAI's ChatGPT model, PuppyPi is capable of understanding user commands and performing semantic analysis of visual scenes within its field of view. It can interpret image content and features, delivering contextual feedback via both text and speech.

3) Ball Tracking and Shooting

With semantic understanding powered by a large language model, PuppyPi can lock onto a target based on commands, adjust its posture in real time, and precisely execute ball tracking and kicking actions.

4) Autonomous Patrolling

Utilizing semantic understanding from a large language model, PuppyPi can accurately detect and track lines of various colors in real time while autonomously navigating obstacles, ensuring smooth and efficient patrolling.

5) Object Transport

Powered by a vision language model, PuppyPi can recognize target objects in its view and intelligently respond to commands by adjusting its posture in real time to pick up and place items with precision.

6) Posture Detection

PuppyPi continuously reads IMU sensor data, allowing the AI large model to analyze its posture in real time and make intelligent adjustments based on commands.
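The static part of this posture estimate can be sketched from the accelerometer alone: recover pitch and roll from the gravity vector with the standard tilt-from-gravity formulas. Axis conventions below are assumed, and a real system would also fuse gyroscope data to stay accurate while moving.

```python
import math

# Posture-estimation sketch: recover pitch and roll from a static
# accelerometer reading (the gravity vector). Axis conventions are assumed;
# a real system would fuse gyroscope data to handle motion as well.

def tilt_from_accel(ax: float, ay: float, az: float):
    """Return (pitch, roll) in degrees from accelerometer axes in g."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Body tilted 30° nose-up, gravity split between the x and z axes:
pitch, roll = tilt_from_accel(-0.5, 0.0, math.sqrt(3) / 2)
print(round(pitch, 1), round(roll, 1))  # → 30.0 0.0
```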

7) Smart Assistant

Leveraging the multimodal model deployed on its body, PuppyPi is capable of recognizing and analyzing objects within its field of view. Combined with ChatGPT, it can understand user commands and execute corresponding actions and responses.

8) Temperature Reporting

Equipped with a temperature and humidity sensor, PuppyPi can continuously monitor environmental conditions, gather real-time data, and use semantic understanding powered by a large language model to report the current temperature and humidity.

5. Lidar Functions

The LD19P Lidar utilizes Time-of-Flight (TOF) technology, offering a measurement range of up to 12 meters at 4,500 readings per second. When integrated with the Robot Operating System (ROS), it enables smooth indoor mapping and navigation.

1) Lidar Positioning

PuppyPi can be equipped with a TOF Lidar for 360° laser scanning of its surroundings. This allows it to perform advanced SLAM functions, including real-time localization, mapping, path planning, and dynamic obstacle avoidance.

2) Lidar Mapping & Navigation

PuppyPi supports popular mapping algorithms like Gmapping, Hector, and Karto, and enables features such as point-to-point navigation, multi-point navigation, and TEB path planning for enhanced movement and exploration.

3) Multi-Point Navigation, Dynamic Obstacle Avoidance

The TOF Lidar continuously scans the environment and, during multi-point navigation, actively plans the path while avoiding obstacles in real time.
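The core of such a reactive check can be sketched in a few lines: scan the (angle, range) returns inside a forward cone and decide whether the nearest one demands a stop or replan. The ±30° cone and 0.3 m stop distance are assumed tuning values, not PuppyPi's actual planner parameters.

```python
import math

# Obstacle-check sketch over a lidar scan: given (angle, range) pairs in
# radians and meters, find the closest return inside a forward cone and
# decide whether to stop. Cone width and stop distance are assumed values.

def front_obstacle(scan, cone_deg=30.0, stop_m=0.3):
    """Return (min_range, should_stop) for returns within ±cone_deg of forward."""
    in_cone = [r for a, r in scan
               if abs(a) <= math.radians(cone_deg) and r > 0.0]
    if not in_cone:
        return math.inf, False
    nearest = min(in_cone)
    return nearest, nearest < stop_m

# A toy scan: obstacle 0.25 m slightly off dead ahead, walls to the sides.
scan = [(-math.pi / 2, 2.0), (-0.1, 0.25), (0.0, 0.9), (math.pi / 2, 1.8)]
print(front_obstacle(scan))  # → (0.25, True)
```

In a ROS setup this logic would live in a callback on the Lidar's scan topic, feeding the local planner rather than printing.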

4) Lidar Tracking & Surveillance

PuppyPi utilizes Time-of-Flight (TOF) LiDAR technology to scan its designated area. Upon detecting an approaching object, it automatically orients itself toward the object and issues a warning. Additionally, it can continuously scan and track moving targets, following their movements in real time.

6. Sensor Expansion for Enhanced Functionality

PuppyPi is equipped with a comprehensive sensor expansion pack, including a temperature and humidity sensor, ultrasonic sensor, touch sensor, and a range of additional electronic modules. This flexible architecture enables seamless integration into diverse AI applications, supports deep learning tasks, and provides a robust foundation for advanced development and creative exploration.

1) Distance Ranging & Obstacle Avoidance

Using the glowing ultrasonic sensor, PuppyPi can measure the distance to objects ahead in real time and autonomously avoid obstacles in its path.

2) Touch Sensing

By touching the metal plate on the touch sensor, PuppyPi can perform corresponding responsive actions.

3) Dot Matrix Display

PuppyPi is capable of recognizing object shapes and colors, and presenting this information on the integrated dot matrix screen.

4) Temperature and Humidity Display

Using the temperature and humidity sensor, PuppyPi can acquire real-time environmental data and display it on the dot matrix display.

7. Supports Python Programming & Various Control Methods

1) Python Programming

All intelligent Python code is open source, with detailed annotations for easy self-study.

2) PC Software Control

Through the PC software, PuppyPi's height and inclination can be adjusted on the fly, allowing it to turn while walking.

3) App Control

Android and iOS mobile apps are available. Via the app, you can remotely control the robot and view what it sees.

4) Support Wireless Controller

PuppyPi supports a wireless controller, which connects to the robot via Bluetooth for real-time control.

Packing List

1* PuppyPi (Raspberry Pi 4B 4GB pre-assembled)

1* 8.4V charger

3* EVA Ball

3* Tag (6.5 × 6.5 cm)

1* Card reader

1* Accessory bag

1* WonderEcho Pro AI voice interaction + Type-C cable

1* PS2 wireless controller + receiver

1* Glowing ultrasonic sensor

1* Temperature and humidity sensor

1* Touch sensor

1* Dot matrix display

1* Bracket (for ultrasonic sensor)

1* Sensor accessory bag

1* TOF Lidar

1* Lidar mount

1* 4PIN wire

1* Type-C cable

Specifications

Size: 226 × 149 × 190 mm

Weight: 720g

Material: Aluminium alloy

Resolution: 480P

DOF: 8DOF

Power: 7.4V 2200mAh Lipo battery

Hardware: Raspberry Pi 4B/5 and Raspberry Pi expansion board

Software: PC software, iOS/ Android APP

Operating system: Ubuntu 22.04 LTS + ROS2 Humble & Ubuntu 20.04 LTS + ROS1 Noetic

ROS version: Raspberry Pi 5: ROS1 & ROS2; Raspberry Pi 4B: ROS1

Servo: HPS-0618SG coreless servo

Control method: PC, APP and wireless handle control

Package size: 32 × 32 × 17 cm

Package weight: About 1.5kg
