Robot Environment - Troubleshooting¶
Common issues and solutions for the Robot Environment system.
Table of Contents¶
- Frequently Asked Questions (FAQ)
- Object Detection Problems
- Robot Movement Issues
- Hardware Problems
- Getting Help
Frequently Asked Questions (FAQ)¶
1. Why do I get a ModuleNotFoundError: No module named 'text2speech.engines'?¶
This is a known issue in the text2speech package distribution where sub-packages like engines are not correctly included when installed as a regular package.
Solution:
The most reliable fix is to install the text2speech package in editable mode from its source checkout (`pip install -e path/to/text2speech`), which makes all sub-packages importable.
If you are the maintainer of the text2speech repository, ensure the pyproject.toml explicitly includes all sub-packages:
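As a sketch (assuming the project uses setuptools as its build backend), the package-discovery section of `pyproject.toml` should match `text2speech` and all of its sub-packages, for example:

```toml
[tool.setuptools.packages.find]
# The trailing * also matches sub-packages such as text2speech.engines
include = ["text2speech*"]
```

Alternatively, each sub-package can be listed explicitly under `[tool.setuptools]` `packages`.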
2. The robot moves to the wrong place or misses the object. What should I do?¶
First, check that the workspace is correctly calibrated. Use env.get_workspace_by_id("your_ws_id").get_bounds() to inspect the world coordinates the system is actually using.
Second, ensure you are acting on fresh detections: always move to an observation pose and wait about a second before calling get_detected_objects().
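The "fresh detections" rule can be enforced with a small guard. This is a sketch using only the standard library; the `max_age_s` value and the Unix-timestamp convention are assumptions, not part of the robot_environment API:

```python
import time

def is_detection_fresh(detection_timestamp, max_age_s=1.0):
    """Return True if a detection (Unix timestamp) is recent enough to act on."""
    return (time.time() - detection_timestamp) <= max_age_s

# A detection from 5 seconds ago should be rejected:
print(is_detection_fresh(time.time() - 5.0))  # False
print(is_detection_fresh(time.time()))        # True
```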
3. How do I switch between simulation and real robot?¶
When initializing the Environment class, set use_simulation=True for Gazebo and use_simulation=False for the real hardware.
Note that the real Niryo robot requires a specific IP address (default is 192.168.0.140).
4. Can I use this without a GPU?¶
Yes! While object detection is faster on a GPU, models like yolo-world or yoloe-11s can run on a standard CPU with reasonable performance for pick-and-place tasks.
5. Why is the camera feed delayed?¶
The camera feed is streamed via Redis. If you notice a lag, it might be due to network congestion (if using a real robot over Wi-Fi) or high CPU usage by the vision models. Try a lighter model or reduce the camera update frequency in the configuration.
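To tell network lag apart from processing lag, it helps to measure end-to-end latency, assuming each frame carries a publish timestamp when it is written to Redis (an assumption about your setup, not a documented field):

```python
import time

def frame_latency_s(publish_ts, now=None):
    """Seconds between when a frame was published (e.g. to Redis) and `now`."""
    if now is None:
        now = time.time()
    return max(0.0, now - publish_ts)

# With an explicit clock the result is deterministic:
print(frame_latency_s(100.0, now=100.5))  # 0.5
```

If this number grows steadily, the consumer cannot keep up and a lighter model or lower update rate is the right fix.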
Object Detection Problems¶
No Objects Detected¶
Symptoms:
- get_detected_objects() returns empty list
- Camera shows black screen
- "No objects detected" messages
Solutions:
1. Verify the camera is working.
2. Check that Redis is running.
3. Verify the camera thread is started.
4. Check lighting conditions:
   - Ensure the workspace is well-lit
   - Avoid shadows and glare
   - Use consistent lighting
5. Verify object labels match what the detector reports.
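Label mismatches are a common cause of empty results (e.g. querying "cube" while the detector reports "red cube"). A hedged sketch using only the standard library; the label lists are illustrative, not actual detector output:

```python
from difflib import get_close_matches

def resolve_label(query, detected_labels):
    """Map a user-supplied object name onto the closest detected label."""
    # Exact (case-insensitive) match first
    lowered = {lbl.lower(): lbl for lbl in detected_labels}
    if query.lower() in lowered:
        return lowered[query.lower()]
    # Otherwise fall back to fuzzy matching
    matches = get_close_matches(query.lower(), list(lowered), n=1, cutoff=0.6)
    return lowered[matches[0]] if matches else None

print(resolve_label("Pencil", ["pencil", "red cube"]))  # pencil
print(resolve_label("banana", ["pencil", "red cube"]))  # None
```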
Objects Detected at Wrong Positions¶
Symptoms:
- Robot misses objects when picking
- Coordinates don't match the visual position
- Objects appear shifted in the camera view
Solutions:
1. Recalibrate the camera transformation.
2. Check that the workspace is level:
   - Ensure the robot base is stable
   - The workspace surface should be flat
   - Check for tilting or movement
3. Update detections immediately before picking.
4. Verify your understanding of the coordinate system.
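One way to sanity-check the coordinate system is to interpolate a workspace-relative position (x_rel, y_rel in [0, 1]) into world coordinates from the workspace corners. A sketch under the assumption that bounds are given as (x_min, x_max, y_min, y_max) in metres; the Niryo numbers are the ones quoted in the reachability note later on this page:

```python
def relative_to_world(x_rel, y_rel, bounds):
    """Interpolate a relative (0..1) workspace position into world coordinates.

    bounds: (x_min, x_max, y_min, y_max) in metres.
    """
    x_min, x_max, y_min, y_max = bounds
    x = x_min + x_rel * (x_max - x_min)
    y = y_min + y_rel * (y_max - y_min)
    return (x, y)

niryo_bounds = (0.163, 0.337, -0.087, 0.087)
print(relative_to_world(0.5, 0.5, niryo_bounds))  # workspace centre, approx. (0.25, 0.0)
```

If the centre of the camera image does not map to roughly the centre of the physical workspace, recalibration is needed.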
Detection is Too Slow¶
Symptoms:
- Long delays before the robot responds
- Camera updates lag behind
- Low FPS (< 1 frame/second)
Solutions:
1. Use a faster detection model.
2. Reduce the camera update rate.
3. Check GPU availability.
4. Optimize detection parameters.
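Reducing the update rate can be sketched as a simple rate limiter around the detection call. The 0.5 s interval is an assumption; the real update frequency belongs in the configuration:

```python
class RateLimiter:
    """Allow an action at most once per `min_interval_s` seconds."""

    def __init__(self, min_interval_s):
        self.min_interval_s = min_interval_s
        self._last = float("-inf")

    def ready(self, now):
        """Return True (and arm the cooldown) if enough time has passed."""
        if now - self._last >= self.min_interval_s:
            self._last = now
            return True
        return False

limiter = RateLimiter(min_interval_s=0.5)  # at most 2 detections/second
print(limiter.ready(now=0.0))  # True  (first call always fires)
print(limiter.ready(now=0.2))  # False (too soon)
print(limiter.ready(now=0.6))  # True
```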
Robot Movement Issues¶
Robot Won't Move¶
Symptoms:
- Commands are accepted but nothing moves
- Robot stays in the same position
- "Movement failed" errors
Solutions:
1. Check the robot connection.
2. Verify simulation vs. real mode.
3. Check robot power and calibration:
   - Ensure the robot is powered on
   - Run the calibration routine if needed
   - Check for error LEDs on the robot
4. Verify the coordinates are reachable:
```python
# Check workspace bounds
upper_left = get_workspace_coordinate_from_point("niryo_ws", "upper left corner")
lower_right = get_workspace_coordinate_from_point("niryo_ws", "lower right corner")
print(f"Valid X range: [{lower_right[0]}, {upper_left[0]}]")
print(f"Valid Y range: [{lower_right[1]}, {upper_left[1]}]")
# Niryo: X=[0.163, 0.337], Y=[-0.087, 0.087]
```
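The same bounds can be wrapped in a small pre-flight check before commanding a move. A sketch; the range-tuple layout is an assumption, and the numbers are the Niryo values quoted above:

```python
def is_reachable(x, y, x_range, y_range):
    """Return True if (x, y) lies inside the workspace bounds (metres)."""
    return x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]

NIRYO_X = (0.163, 0.337)
NIRYO_Y = (-0.087, 0.087)

print(is_reachable(0.25, 0.0, NIRYO_X, NIRYO_Y))  # True
print(is_reachable(0.40, 0.0, NIRYO_X, NIRYO_Y))  # False: X beyond 0.337
```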
Collision Detection Triggered¶
Symptoms:
- Robot stops suddenly
- "Collision detected" messages
- Robot needs a reset before continuing
Solutions:
1. Clear the collision flag.
2. Check the workspace for obstacles:
   - Remove objects outside the workspace
   - Ensure cables aren't blocking movement
   - Check gripper clearance
3. Adjust movement parameters.
4. Move to a safe observation pose.
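Recovery after a collision usually means: clear the flag, back off to the observation pose, then retry. The control flow can be sketched generically; `move_fn` and `recover_fn` are hypothetical callables, not robot_environment API:

```python
def retry_move(move_fn, recover_fn, attempts=3):
    """Try a movement, running a recovery routine between failed attempts."""
    for attempt in range(attempts):
        try:
            return move_fn()
        except RuntimeError:  # e.g. a "Collision detected" error
            if attempt == attempts - 1:
                raise
            recover_fn()  # clear the collision flag, move to observation pose

# Toy demonstration with a move that fails once, then succeeds:
state = {"fails": 1}
def flaky_move():
    if state["fails"] > 0:
        state["fails"] -= 1
        raise RuntimeError("Collision detected")
    return "ok"

print(retry_move(flaky_move, recover_fn=lambda: None))  # ok
```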
Gripper Problems¶
Symptoms:
- Objects slip out of the gripper
- Gripper doesn't close/open
- "Failed to grasp" errors
Solutions:
1. Check the object size.
2. Verify gripper calibration.
3. Check object graspability:
   - Objects should have flat surfaces
   - Avoid round or irregular shapes
   - Ensure objects aren't too heavy (< 500 g)
4. Adjust the grasp approach angle.
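A pre-grasp sanity check can filter out objects that are too large or too heavy before attempting a pick. The 40 mm maximum opening is an illustrative assumption (check your gripper's datasheet); the 0.5 kg limit follows the weight guideline above:

```python
def can_grasp(width_m, mass_kg,
              max_opening_m=0.040,  # assumption: verify against gripper datasheet
              max_mass_kg=0.5):     # from the < 500 g guideline
    """Rudimentary pre-grasp feasibility check."""
    return 0.0 < width_m <= max_opening_m and 0.0 < mass_kg <= max_mass_kg

print(can_grasp(width_m=0.030, mass_kg=0.2))  # True
print(can_grasp(width_m=0.060, mass_kg=0.2))  # False: wider than the jaws
```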
Hardware Problems¶
Niryo Robot Specific¶
Issue: Robot not responding
Issue: Calibration needed
Issue: Learning mode activated
- Manually disable learning mode on the robot
- With learning mode off, the motors engage and the robot becomes stiff
WidowX Robot Specific¶
Issue: Joint limits
Issue: Power supply
- Ensure adequate power (12 V)
- Check for voltage drops during operation
Camera Issues¶
Issue: Poor image quality

```python
import cv2

# Adjust camera settings on the capture device
camera = cv2.VideoCapture(0)  # assumption: camera index 0
camera.set(cv2.CAP_PROP_EXPOSURE, -7)
camera.set(cv2.CAP_PROP_BRIGHTNESS, 130)
```
Issue: Wrong camera selected

```python
import cv2

# List available cameras
for i in range(4):
    cap = cv2.VideoCapture(i)
    if cap.isOpened():
        print(f"Camera {i} available")
        cap.release()
```
Getting Help¶
Resources¶
- GitHub Issues: https://github.com/dgaida/robot_environment/issues
- Documentation: README.md
Quick Diagnostic Checklist¶
Before opening an issue, check:
- [ ] Redis is running
- [ ] Robot is powered on (if using real robot)
- [ ] Camera is working (check Redis stream)
- [ ] Object detection is running (check for detections)
- [ ] Coordinates are within workspace bounds
- [ ] Object names match detected labels exactly
- [ ] All dependencies are installed
- [ ] Log files checked for errors
If everything checks out and the problem persists, please open a GitHub issue and include the results of this checklist!
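The first checklist item can be automated with a quick connectivity probe. A minimal sketch using only the standard library; host and port are the Redis defaults:

```python
import socket

def port_open(host, port, timeout_s=0.5):
    """Return True if a TCP connection to host:port succeeds (e.g. Redis on 6379)."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

print("Redis reachable" if port_open("localhost", 6379) else "Redis NOT reachable")
```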