
Robot Environment - Troubleshooting

Common issues and solutions for the Robot Environment system.

Frequently Asked Questions (FAQ)

1. Why do I get a ModuleNotFoundError: No module named 'text2speech.engines'?

This is a known issue in the text2speech package distribution where sub-packages like engines are not correctly included when installed as a regular package.

Solution: The best way to fix this is to install the text2speech package in editable mode from its source directory:

cd /path/to/text2speech/repository
pip install -e .

If you are the maintainer of the text2speech repository, ensure the pyproject.toml correctly includes all sub-packages:

[tool.setuptools.packages.find]
where = ["."]
include = ["text2speech*"]

2. The robot moves to the wrong place or misses the object. What should I do?

First, check if the workspace is correctly calibrated. Use env.get_workspace_by_id("your_ws_id").get_bounds() to see the world coordinates the system is using. Second, ensure you are using fresh detections. Always move to an observation pose and wait a second before calling get_detected_objects().

3. How do I switch between simulation and real robot?

When initializing the Environment class, set use_simulation=True for Gazebo and use_simulation=False for the real hardware. Note that the real Niryo robot requires a specific IP address (default is 192.168.0.140).
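As a sketch, the two modes differ only in the constructor arguments. The small helper below is hypothetical (not part of the package), and the `robot_ip` keyword name is an assumption; check the actual `Environment` signature:

```python
# Hypothetical helper -- the Environment class is real, but this wrapper
# and the 'robot_ip' keyword name are illustrative assumptions.
NIRYO_DEFAULT_IP = "192.168.0.140"  # default IP mentioned above

def environment_kwargs(use_simulation: bool, robot_ip: str = NIRYO_DEFAULT_IP) -> dict:
    """Build keyword arguments for Environment() for sim vs. real hardware."""
    kwargs = {"use_simulation": use_simulation}
    if not use_simulation:
        # Only the real Niryo robot needs an IP address; Gazebo does not.
        kwargs["robot_ip"] = robot_ip
    return kwargs

# env = Environment(**environment_kwargs(use_simulation=False))
```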

4. Can I use this without a GPU?

Yes! While object detection is faster on a GPU, models like yolo-world or yoloe-11s can run on a standard CPU with reasonable performance for pick-and-place tasks.

5. Why is the camera feed delayed?

The camera feed is streamed via Redis. If you notice a lag, it might be due to network congestion (if using a real robot over Wi-Fi) or high CPU usage by the vision models. Try a lighter model or reduce the camera update frequency in the configuration.


Object Detection Problems

No Objects Detected

Symptoms:

  • get_detected_objects() returns an empty list
  • Camera shows a black screen
  • "No objects detected" messages

Solutions:

  1. Verify camera is working:

    from redis_robot_comm import RedisImageStreamer
    streamer = RedisImageStreamer(stream_name="robot_camera")
    img, metadata = streamer.get_latest_image()
    print(f"Image shape: {img.shape}")  # Should be (480, 640, 3)
    

  2. Check Redis is running:

    docker ps | grep redis
    
    # If not running:
    docker run -p 6379:6379 redis:alpine
    

  3. Verify camera thread is started:

    # In server initialization
    env = Environment(
        ...
        start_camera_thread=True  # Must be True!
    )
    

  4. Check lighting conditions:

  • Ensure the workspace is well-lit
  • Avoid shadows and glare
  • Use consistent lighting

  5. Verify object labels:

    labels = env.get_object_labels_as_string()
    print(f"Recognizable objects: {labels}")
    
    # Add custom labels if needed
    env.add_object_name2object_labels("your_object")
    


Objects Detected at Wrong Positions

Symptoms:

  • Robot misses objects when picking
  • Coordinates don't match the visual position
  • Objects appear shifted in the camera view

Solutions:

  1. Recalibrate camera transformation:

    # Check workspace calibration
    workspace = env.get_workspace_by_id("niryo_ws")
    print(f"Workspace bounds: {workspace.get_bounds()}")
    
    # Verify transformation parameters
    # May need to recalibrate camera-to-world transform
    

  2. Check the workspace is level:

  • Ensure the robot base is stable
  • The workspace surface should be flat
  • Check for tilting or movement

  3. Update the detection immediately before picking:

    # ✅ Good: Fresh detection
    objects = get_detected_objects()
    obj = objects[0]
    pick_object(obj['label'], [obj['x'], obj['y']])
    
    # ❌ Bad: Stale coordinates
    pick_object("pencil", [0.15, -0.05])  # May have moved!
    

  4. Verify the coordinate system orientation:

    Niryo workspace (top view):
        Y-axis →
        ┌─────────┐
        │         │
    X ↓ │ Center  │
        │  (0,0)  │
        │         │
        └─────────┘
    


Detection is Too Slow

Symptoms:

  • Long delays before the robot responds
  • Camera updates lag behind
  • Low FPS (< 1 frame/second)

Solutions:

  1. Use faster detection model:

    # In Environment initialization
    visual_cortex = VisualCortex(
        objdetect_model_id="yoloworld",  # Faster than owlv2
        device="cuda"  # Use GPU if available
    )
    

  2. Reduce camera update rate:

    # In camera thread
    time.sleep(0.5)  # Update every 0.5s instead of 0.1s
    

  3. Check GPU availability:

    import torch
    print(f"CUDA available: {torch.cuda.is_available()}")
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    
    # If no GPU:
    # - Use CPU with yoloworld model
    # - Or add GPU to system
    

  4. Optimize detection parameters:

    config = {
        'confidence_threshold': 0.20,  # Higher = fewer false positives
        'iou_threshold': 0.5,
        'max_detections': 50  # Lower = faster
    }
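A small sanity check can catch typos in the tuning dictionary before they silently degrade detection. This helper is a hypothetical sketch mirroring the example config above; the detector's real parameter names may differ:

```python
# Hypothetical validator for the tuning dictionary shown above.
def validate_detection_config(config: dict) -> dict:
    """Raise ValueError for out-of-range detection parameters."""
    if not 0.0 <= config["confidence_threshold"] <= 1.0:
        raise ValueError("confidence_threshold must be in [0, 1]")
    if not 0.0 <= config["iou_threshold"] <= 1.0:
        raise ValueError("iou_threshold must be in [0, 1]")
    if config["max_detections"] < 1:
        raise ValueError("max_detections must be >= 1")
    return config

config = validate_detection_config({
    'confidence_threshold': 0.20,
    'iou_threshold': 0.5,
    'max_detections': 50,
})
```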
    


Robot Movement Issues

Robot Won't Move

Symptoms:

  • Commands accepted but no movement
  • Robot stays in the same position
  • "Movement failed" errors

Solutions:

  1. Check robot connection:

    # For Niryo
    robot = env.robot()
    status = robot.robot_ctrl().get_hardware_status()
    print(f"Robot connected: {status}")
    

  2. Verify simulation vs. real mode:

    # Simulation (Gazebo):
    env = Environment(..., use_simulation=True)
    
    # Real robot (default IP 192.168.0.140):
    env = Environment(..., use_simulation=False)
    
    # When launching via the server, pass the --no-simulation flag for
    # the real robot; without the flag the system runs in simulation mode.
    

  3. Check robot power and calibration:

  • Ensure the robot is powered on
  • Run the calibration routine if needed
  • Check for error LEDs on the robot

  4. Verify the coordinates are reachable:

    # Check workspace bounds
    upper_left = get_workspace_coordinate_from_point("niryo_ws", "upper left corner")
    lower_right = get_workspace_coordinate_from_point("niryo_ws", "lower right corner")
    
    print(f"Valid X range: [{lower_right[0]}, {upper_left[0]}]")
    print(f"Valid Y range: [{lower_right[1]}, {upper_left[1]}]")
    
    # Niryo: X=[0.163, 0.337], Y=[-0.087, 0.087]
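A pre-flight check using the Niryo bounds quoted above can reject unreachable targets before sending a move command. This is a sketch with hard-coded bounds; in real use, read the live bounds from get_workspace_coordinate_from_point() instead:

```python
# Niryo workspace rectangle quoted in this guide (hard-coded for the sketch;
# query the live workspace bounds in real use).
X_RANGE = (0.163, 0.337)
Y_RANGE = (-0.087, 0.087)

def is_reachable(x: float, y: float) -> bool:
    """True if (x, y) lies inside the Niryo workspace rectangle."""
    return X_RANGE[0] <= x <= X_RANGE[1] and Y_RANGE[0] <= y <= Y_RANGE[1]
```

Calling this before every pick turns a silent "movement failed" into an explicit out-of-bounds diagnosis.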
    


Collision Detection Triggered

Symptoms:

  • Robot stops suddenly
  • "Collision detected" messages
  • Robot needs a reset before continuing

Solutions:

  1. Clear collision flag:

    clear_collision_detected()
    

  2. Check the workspace for obstacles:

  • Remove objects outside the workspace
  • Ensure cables aren't blocking movement
  • Check gripper clearance

  3. Adjust movement parameters:

    # In robot controller (if accessible)
    robot_ctrl.set_collision_threshold(higher_value)
    

  4. Move to a safe observation pose:

    move2observation_pose("niryo_ws")
    clear_collision_detected()
    


Gripper Problems

Symptoms:

  • Objects slip out of the gripper
  • Gripper doesn't close/open
  • "Failed to grasp" errors

Solutions:

  1. Check object size:

    obj = get_detected_object([x, y])
    if obj['width_m'] > 0.05:
        print("Object too large for gripper!")
        # Use push_object() instead
    

  2. Verify gripper calibration:

    # Test gripper
    robot.robot_ctrl().open_gripper()
    time.sleep(2)
    robot.robot_ctrl().close_gripper()
    

  3. Check object graspability:

  • Objects should have flat surfaces
  • Avoid round or irregular shapes
  • Ensure objects aren't too heavy (< 500 g)

  4. Adjust the grasp approach angle:

    # Object rotation affects grasp success
    obj = get_detected_object([x, y])
    print(f"Object rotation: {obj['rotation_rad']} rad")
    
    # Robot adjusts approach automatically
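The size check from step 1 and the push fallback can be combined into a single strategy decision. This helper is illustrative: the 0.05 m limit and the `'width_m'` key come from this guide, but the function itself is not part of the API:

```python
# Hypothetical strategy helper -- the 0.05 m gripper limit and the
# 'width_m' key are taken from this guide; the function is illustrative.
GRIPPER_MAX_WIDTH_M = 0.05

def choose_manipulation(obj: dict) -> str:
    """Return 'pick' for graspable objects, 'push' for oversized ones.

    Expects a detected-object dict such as get_detected_object() returns.
    """
    return "pick" if obj["width_m"] <= GRIPPER_MAX_WIDTH_M else "push"

# choose_manipulation({"label": "pencil", "width_m": 0.01}) -> "pick"
# choose_manipulation({"label": "box", "width_m": 0.08})    -> "push"
```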
    


Hardware Problems

Niryo Robot Specific

Issue: Robot not responding

# Check Niryo connection
ping <robot_ip>

# Default: 192.168.0.140 (see FAQ 3); adjust to your robot's address

Issue: Calibration needed

# Run calibration
robot.robot_ctrl().calibrate()

Issue: Learning mode activated

  • Manually disable learning mode on the robot
  • The robot will be stiff when learning mode is off


WidowX Robot Specific

Issue: Joint limits

# WidowX has different workspace
# Adjust coordinates accordingly

Issue: Power supply

  • Ensure adequate power (12V)
  • Check for voltage drops during operation


Camera Issues

Issue: Poor image quality

# Adjust camera settings
camera.set(cv2.CAP_PROP_EXPOSURE, -7)
camera.set(cv2.CAP_PROP_BRIGHTNESS, 130)

Issue: Wrong camera selected

# List available cameras
for i in range(4):
    cap = cv2.VideoCapture(i)
    if cap.isOpened():
        print(f"Camera {i} available")
    cap.release()


Getting Help

Resources

  • GitHub Issues: https://github.com/dgaida/robot_environment/issues
  • Documentation: README.md

Quick Diagnostic Checklist

Before opening an issue, check:

  • [ ] Redis is running
  • [ ] Robot is powered on (if using real robot)
  • [ ] Camera is working (check Redis stream)
  • [ ] Object detection is running (check for detections)
  • [ ] Coordinates are within workspace bounds
  • [ ] Object names match detected labels exactly
  • [ ] All dependencies are installed
  • [ ] Log files checked for errors

If all checked and still having issues, please open a GitHub issue with the information above!