Added new md files for course3.4/5/6 #63

Closed · wants to merge 1 commit into from
231 changes: 231 additions & 0 deletions docs/courses/robotics/3.4-mobile-robotics.md
@@ -0,0 +1,231 @@

(3.4-mobile-robotics)=
# 🧩 3.4 Mobile Robotics

```{contents}
:depth: 3
```

## 🔰 Tutorial

In this module, you will develop software to:
1. Demonstrate control of a mobile cobot using frameworks such as the Robot Operating System (**ROS**) and **Isaac Sim**
2. Use a workflow orchestration package to manage asynchronous tasks
3. Define asynchrony in the context of hardware control for autonomous laboratories

### ROS (Robot Operating System)

First, you will learn how to use **ROS** to control a mobile cobot. **ROS** is an open-source framework widely used for building robot applications. **ROS** provides tools and libraries to help design, simulate, and control robots, making it a crucial component in modern robotics.

#### Bill of Materials

- [MyCobot Pi - World's Smallest and Lightest Six-Axis Collaborative Robot](https://shop.elephantrobotics.com/en-ca/collections/mycobot-280/products/mycobot-pi-worlds-smallest-and-lightest-six-axis-collaborative-robot)
A versatile and compact six-axis robot, ideal for mobile robotics applications.

- [Camera Flange 2.0](https://shop.elephantrobotics.com/en-ca/collections/camera-modules/products/camera-flange-2-0)
Used for vision-based tasks in mobile robotics, such as object recognition and navigation.

- [Adaptive Gripper](https://shop.elephantrobotics.com/en-ca/collections/grippers/products/adaptive-gripper)
A flexible gripper designed for precise manipulation and picking tasks in collaborative robotic systems.

- [G-Shape Base 2.0](https://shop.elephantrobotics.com/en-ca/collections/fixed-bases/products/g-shape-base-2-0)
Provides a sturdy mounting platform for the MyCobot, ensuring stability during robotic operations.

- [Prefect](https://www.prefect.io/)
A workflow orchestration tool used to manage and coordinate asynchronous tasks in the system.

  > **Review comment:** Software separate from BoM

- [AprilTags Python Library](https://pypi.org/project/apriltag/)
A computer vision library for identifying and tracking AprilTags, used for spatial referencing and navigation.

- [ROS Noetic Ninjemys](http://wiki.ros.org/noetic/Installation) (for Ubuntu 20.04)
The primary framework for controlling the robot, providing a robust platform for robotic system integration and task execution.

- [ROS 2 Foxy Fitzroy](https://docs.ros.org/en/foxy/Installation.html) (for newer ROS 2 applications)
A newer version of the ROS framework used for advanced robotics applications.

- [Raspberry Pi 4 Model B](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/)
A powerful single-board computer used for controlling robots and running robotics software like ROS.

- [LIDAR sensor for obstacle detection](https://ca.robotshop.com/search?type=product&options%5Bprefix%5D=last&options%5Bunavailable_products%5D=last&q=LIDAR+sensor&_gl=1*1s7g9lg*_up*MQ..&gclid=CjwKCAjwl6-3BhBWEiwApN6_krjEcVjIxN0V3ncgGcH6lrUhTDkY-0Ym4QQjyH3-0jvSYBfa903TbxoCXyEQAvD_BwE)
Used for real-time obstacle detection and mapping in mobile robotics.

  > **Review comment:** Built-in to MyAGV

- [TurtleBot3 Burger](https://emanual.robotis.com/docs/en/platform/turtlebot3/overview/)
A small mobile robot used for learning and prototyping robotics applications with ROS.

- [Raspberry Pi Camera Module](https://www.raspberrypi.org/products/camera-module-v2/)
A camera module used for vision-based tasks such as object tracking and navigation.

- USB-A to micro USB-B cable:
Used to connect and power devices such as the Raspberry Pi or peripherals.

- SD Card with Raspbian OS:
Pre-loaded with the Raspbian OS for use with the Raspberry Pi to facilitate software installations and configurations.

#### Documentation

- [MyCobot Pi Documentation](https://docs.elephantrobotics.com/docs/gitbook-en/2-serialproduct/2.1-280/2.1.2-PI.html)
Detailed guide on setting up and operating the MyCobot Pi.

- [Gripper Control via Python](https://docs.elephantrobotics.com/docs/gitbook-en/7-ApplicationBasePython/7.5_gripper.html)
Guide for controlling the adaptive gripper using Python commands.

- [TurtleBot3 Documentation](https://emanual.robotis.com/docs/en/platform/turtlebot3/overview/)
Official documentation for the TurtleBot3, providing details on setup, usage, and applications.

#### Notes
These materials provide a comprehensive setup for controlling a mobile cobot, including vision systems, robotic arms, grippers, and obstacle detection sensors. The setup integrates ROS and workflow orchestration using Prefect, enabling asynchronous task execution and complex robot control in various environments, such as autonomous labs or educational settings.


#### Demo

✅ Read the [ROS Noetic Documentation](http://wiki.ros.org/noetic/)

✅ Watch [Getting Started with ROS](https://www.youtube.com/watch?v=ehtUb55Rmmg&list=PLk51HrKSBQ8-jTgD0qgRp1vmQeVSJ5SQC)

✅ Learn about controlling robots with the [ROS Navigation Stack](http://wiki.ros.org/navigation)

You will implement a more complex movement pattern, such as moving in a square path with the TurtleBot3. Additionally, the robot will use LIDAR to avoid obstacles and adjust its motion accordingly.

```bash
# Launch the TurtleBot3 simulation in Gazebo (requires the TurtleBot3 packages)
export TURTLEBOT3_MODEL=burger
roslaunch turtlebot3_gazebo turtlebot3_world.launch

# Drive forward at 0.2 m/s (publish a single Twist command)
rostopic pub -1 /cmd_vel geometry_msgs/Twist "linear:
  x: 0.2
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.0"

# Rotate in place at ~1.57 rad/s (roughly a 90-degree turn per second)
rostopic pub -1 /cmd_vel geometry_msgs/Twist "linear:
  x: 0.0
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 1.57"

# Inspect the LIDAR data published by the simulated sensor (sensor_msgs/LaserScan)
rostopic echo /scan

# Obstacle-aware speed adjustment cannot be done from the shell alone; it requires
# a node that subscribes to /scan and publishes /cmd_vel (see the Python sketch below)
```

In this code:
1. The robot alternates between driving straight and turning in place; repeating the drive-then-turn pair four times traces a square.
2. The `/scan` topic carries the LIDAR data (`sensor_msgs/LaserScan`) used for obstacle detection.
3. Dynamically adjusting speed based on obstacle proximity requires a ROS node that subscribes to `/scan` and publishes to `/cmd_vel`, as shown in the sketch below.
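
As a complement to the shell commands above, here is a minimal `rospy` sketch of the square-with-obstacle-avoidance behaviour. It assumes ROS Noetic with the TurtleBot3 simulation running and the `/cmd_vel` and `/scan` topics used above; the node name, loop timings, and the 1 m slow-down threshold are illustrative choices, not fixed values.

```python
#!/usr/bin/env python3
# Minimal sketch (assumes ROS Noetic + TurtleBot3 simulation): drive a rough square
# while slowing down when the LIDAR reports a nearby obstacle.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

closest = float("inf")  # most recent minimum range reported by the LIDAR

def scan_callback(msg):
    global closest
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    closest = min(valid) if valid else float("inf")

def main():
    rospy.init_node("square_driver")  # illustrative node name
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/scan", LaserScan, scan_callback)
    rate = rospy.Rate(10)  # 10 Hz control loop

    for _ in range(4):  # four sides of the square
        # Drive one side, slowing down if an obstacle is closer than 1 m
        for _ in range(50):  # ~5 s per side
            cmd = Twist()
            cmd.linear.x = 0.1 if closest < 1.0 else 0.2
            pub.publish(cmd)
            rate.sleep()
        # Turn roughly 90 degrees in place (1.57 rad/s for ~1 s)
        for _ in range(10):
            cmd = Twist()
            cmd.angular.z = 1.57
            pub.publish(cmd)
            rate.sleep()

    pub.publish(Twist())  # stop the robot

if __name__ == "__main__":
    main()
```

You can run the sketch directly (e.g., `python3 square_driver.py`) in a sourced ROS workspace while the simulation is running.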

---

### Isaac Sim

Next, we will enhance the integration of **Isaac Sim** with **ROS** to simulate and control more complex robotics applications. Isaac Sim allows you to create realistic simulations with physics-based interactions, which are essential for testing robotic behaviors before deploying them on real hardware.

#### Demo

✅ Watch [Getting Started with Isaac Sim](https://www.youtube.com/watch?v=3pWwkuc2Ecw&pp=ygUeR2V0dGluZyBTdGFydGVkIHdpdGggSXNhYWMgU2lt)

✅ Read [Isaac Sim and ROS Integration](https://docs.nvidia.com/isaac/isaac_ros/ros.html)

We will simulate a TurtleBot3 moving in an environment with obstacles and using Isaac Sim's advanced features like object detection, visual navigation, and path planning. Here’s a more detailed example that involves these capabilities:

```bash
# Launch Isaac Sim with ROS integration (the launch file name depends on your
# Isaac Sim / ROS bridge setup)
roslaunch isaac_sim turtlebot3_integration.launch

# Set up a more complex environment (house world) for the TurtleBot3
roslaunch turtlebot3_gazebo turtlebot3_house.launch

# Drive the TurtleBot3 along a curved path (forward motion plus rotation)
rostopic pub /cmd_vel geometry_msgs/Twist "linear:
  x: 0.5
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.5" -r 10

# Run an object-detection node over the simulated camera feed
# (package and executable names depend on your setup)
rosrun object_detection_node object_detection

# Use the navigation stack for path planning
rosrun move_base move_base
```

**Additional Features in Isaac Sim**:
- **Object Detection**: Integrate Isaac Sim’s built-in computer vision models for detecting objects (e.g., cups, boxes) within the simulated environment.
- **Path Planning**: Use the ROS `move_base` package for autonomous navigation, allowing the robot to calculate paths around obstacles in real time (a minimal goal-sending sketch follows this list).
- **Visual SLAM**: Use visual markers and the robot's onboard camera for Simultaneous Localization and Mapping (SLAM).
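
To make the path-planning step concrete, the sketch below sends a single navigation goal to the `move_base` action server via `actionlib`. It assumes a running navigation stack that publishes a `map` frame; the goal coordinates are arbitrary example values.

```python
#!/usr/bin/env python3
# Minimal sketch: send one navigation goal to the move_base action server.
# Assumes the navigation stack is running and a "map" frame exists.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # arbitrary example coordinates
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0  # face along +x

client.send_goal(goal)
client.wait_for_result()
print("Navigation result state:", client.get_state())
```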

---

### Asynchronous Task Execution

In this section, we will further develop the use of **Prefect** for asynchronous task execution in robotics. We will demonstrate how to handle multiple asynchronous tasks, such as moving a robot, capturing sensor data, and running real-time analysis concurrently.

#### Demo

✅ Learn [Asynchronous Task Execution with Prefect](https://docs.prefect.io/core/concepts/tasks.html)

In this enhanced example, we will manage multiple asynchronous tasks, such as controlling the robot's movement while simultaneously monitoring sensors and running a real-time analysis of LIDAR data.

```python
from prefect import task, Flow
from prefect.executors import LocalDaskExecutor  # enables parallel task execution
import time
import random

@task
def control_robot():
    for _ in range(10):
        print("Moving the robot forward...")
        time.sleep(1)

@task
def monitor_sensors():
    for _ in range(10):
        sensor_value = random.uniform(0.0, 2.0)  # simulating a LIDAR reading
        print(f"LIDAR sensor reading: {sensor_value:.2f}")
        time.sleep(1)

@task
def analyze_data():
    for _ in range(10):
        print("Analyzing real-time data...")
        time.sleep(1)

with Flow("Asynchronous Robotics Control") as flow:
    control_robot()
    monitor_sensors()
    analyze_data()

# Run the three independent tasks in parallel (Prefect 1.x API)
flow.run(executor=LocalDaskExecutor())
```

**New Features**:
- **Asynchronous Control**: The robot moves while the sensors are being monitored.
- **Real-Time Analysis**: Sensor data is analyzed while the robot is moving. This could include LIDAR data processing, obstacle detection, or real-time mapping.
- **Parallel Task Execution**: Because the three tasks have no dependencies and the flow is run with a parallel executor (`LocalDaskExecutor`), Prefect runs movement, monitoring, and analysis concurrently, simulating a real-world scenario where a robot must handle multiple tasks simultaneously.

---

### Define Asynchrony in the Context of Hardware Control

Asynchrony in robotics refers to executing tasks independently and concurrently. In the context of hardware control, this is critical for managing complex processes in an autonomous laboratory, where sensors must continually gather data, robots must execute movement commands, and the system needs to react in real-time.

For instance, while a robot is moving, it must be able to continuously monitor its surroundings (e.g., using LIDAR or cameras) and adjust its trajectory without pausing or waiting for other tasks to complete.
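
A language-level illustration of the same idea, independent of ROS or Prefect, is a pair of `asyncio` coroutines: the simulated motion loop and sensor loop below interleave without either blocking the other. The timings and sensor values are simulated placeholders.

```python
import asyncio
import random

# Minimal sketch: two coroutines run concurrently, so the (simulated) motion
# loop never blocks while sensor readings keep arriving.
async def move_robot():
    for _ in range(5):
        print("Executing movement step...")
        await asyncio.sleep(1)  # non-blocking wait (e.g., a motor command in flight)

async def monitor_lidar():
    for _ in range(10):
        reading = random.uniform(0.0, 2.0)  # simulated range in metres
        print(f"LIDAR reading: {reading:.2f} m")
        await asyncio.sleep(0.5)

async def main():
    await asyncio.gather(move_robot(), monitor_lidar())

asyncio.run(main())
```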

---

### 📄 Assignment

For this assignment, you'll develop a more complex control system for a mobile robot using **ROS** and **Isaac Sim**. You’ll implement the following:

1. **Control the TurtleBot3**: Create a ROS node that moves the robot in a complex path (e.g., figure-eight pattern) while avoiding obstacles using LIDAR.
2. **Simulate and Test in Isaac Sim**: Deploy the robot in Isaac Sim’s photorealistic environment and simulate tasks like object detection and path planning.
3. **Asynchronous Task Management**: Use **Prefect** to orchestrate tasks like motion control, sensor data monitoring, and real-time analysis. Ensure that these tasks run concurrently.
4. **Define Asynchronous Execution**: Provide an explanation of how asynchrony is essential for controlling autonomous robots in a laboratory setting.

157 changes: 157 additions & 0 deletions docs/courses/robotics/3.5-computer-vision.md
@@ -0,0 +1,157 @@

(3.5-computer-vision)=
# 🧩 3.5 Computer Vision

```{contents}
:depth: 3
```

## 🔰 Tutorial

In this module, you will develop software to:
1. Demonstrate spatial referencing and ID lookup by using **OpenCV** and **AprilTags**
2. Use a motorized microscope and **OpenCV** to search for regions of interest (ROI) in a sample

### OpenCV

First, you will utilize **OpenCV**, a powerful computer vision library, to preprocess images, detect objects, and perform real-time computer vision tasks. **OpenCV** is widely used for various computer vision applications, including image filtering, object tracking, and edge detection.

#### Bill of Materials

- [OpenCV](https://pypi.org/project/opencv-python/) (for Python installation: `pip install opencv-python`)
- [AprilTag Python Library](https://pypi.org/project/apriltag/)

  > **Review comment:** Separate hardware and software

- [Motorized Microscope]

  > **Review comment (@sgbaird, Sep 23, 2024):** Microscope will be related to OpenCV, but not AprilTags (separate topics/tutorials)

- [Raspberry Pi Camera Module](https://www.raspberrypi.org/products/camera-module-v2/)
- [USB-A to micro USB-B cable]

#### Demo

✅ Read the [OpenCV Documentation](https://docs.opencv.org/4.x/)

✅ Watch [Introduction to OpenCV](https://www.youtube.com/watch?v=oXlwWbU8l2o)

In this task, you will use **OpenCV** to perform image preprocessing, such as applying a Gaussian blur, detecting edges, and identifying regions of interest (ROI).

```python
import cv2
import numpy as np

# Load an image
img = cv2.imread('sample_image.jpg')

# Apply Gaussian Blur
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Edge Detection
edges = cv2.Canny(blurred, 100, 200)

cv2.imshow('Edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

The above code preprocesses the image to smooth it and then detect edges. This is the first step before applying any region-of-interest search; a sketch of extracting candidate ROIs from the edge map follows.
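
As a sketch of that next step, the snippet below turns the edge map into candidate regions of interest by finding contours and keeping their bounding boxes. The minimum-area threshold is an illustrative value you would tune for your samples.

```python
import cv2

# Building on the edge detection above: find contours and keep their bounding
# boxes as candidate regions of interest (ROIs).
img = cv2.imread('sample_image.jpg')
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 100, 200)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

rois = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 500:  # ignore tiny specks; tune this threshold for your sample
        rois.append({'x': x + w // 2, 'y': y + h // 2, 'w': w, 'h': h})
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

print(f"Found {len(rois)} candidate ROIs")
cv2.imshow('Candidate ROIs', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```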

---

### AprilTags

Next, you will utilize **AprilTags**, a popular fiducial marker system for reliable and robust spatial referencing and object identification. AprilTags are often used in robotics for locating objects and navigating environments.

#### Bill of Materials

- [AprilTags Python Library](https://pypi.org/project/apriltag/)
- [AprilTags Documentation](https://april.eecs.umich.edu/software/apriltag.html)
- [Raspberry Pi Camera Module](https://www.raspberrypi.org/products/camera-module-v2/)
- [Motorized Microscope]

#### Demo

✅ Watch [Vision Programming with AprilTags](https://youtu.be/TG9KAa2EGzQ)

The following code demonstrates how to use **AprilTags** for spatial referencing by detecting tags and performing ID lookup.

> **Review comment:** Would be good to have something demonstrating spatial orientation/locating based on the AprilTag.

```python
import apriltag
import cv2

# Load an image with AprilTags
img = cv2.imread('apriltag_image.jpg')
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Initialize AprilTag detector
detector = apriltag.Detector()

# Detect tags
tags = detector.detect(gray_image)

# Display tag information
for tag in tags:
print(f"Detected tag ID: {tag.tag_id}")
cv2.rectangle(img,
(int(tag.corners[0][0]), int(tag.corners[0][1])),
(int(tag.corners[2][0]), int(tag.corners[2][1])),
(0, 255, 0), 2)

cv2.imshow('AprilTag Detected', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

In this demo, you will detect **AprilTags** in an image and draw bounding boxes around them. You can then use the tag ID for further referencing tasks.
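
Going one step further toward spatial locating, as the reviewer suggests, the sketch below estimates each tag's 3D pose relative to the camera with `cv2.solvePnP`. The camera intrinsics, tag size, and corner ordering used here are assumptions; substitute your own calibration values and check the corner convention of your detector.

```python
import cv2
import numpy as np
import apriltag

# Sketch of spatial locating from a detected tag: recover the tag's pose relative
# to the camera with cv2.solvePnP. The intrinsics (fx, fy, cx, cy) and the physical
# tag size below are placeholders; use your own calibration values.
TAG_SIZE = 0.05  # tag edge length in metres (assumed)
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0  # assumed camera intrinsics
camera_matrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

img = cv2.imread('apriltag_image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
tags = apriltag.Detector().detect(gray)

# Tag corner coordinates in the tag's own frame (z = 0 plane); this assumes the
# detector reports image corners in the same order as these object points.
half = TAG_SIZE / 2.0
object_points = np.array([[-half, -half, 0], [half, -half, 0],
                          [half, half, 0], [-half, half, 0]], dtype=np.float64)

for tag in tags:
    image_points = np.array(tag.corners, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        print(f"Tag {tag.tag_id}: position relative to camera (m): {tvec.ravel()}")
```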

---

### Motorized Microscope and OpenCV

After detecting regions of interest (ROI) using **OpenCV** and **AprilTags**, you will integrate a motorized microscope. The microscope will move automatically based on the ROI found in the image, providing fine control and automation for analyzing samples.

#### Demo

✅ Watch [Controlling a Motorized Microscope](https://youtu.be/fHZSsAKThW4)

> **Review comment:** Replace with content related to OpenFlexure Microscope v7, if possible (https://openflexure.org/)


In this task, you will automate the movement of a motorized microscope using **OpenCV**-detected ROIs. Below is a conceptual pseudo-code for moving the microscope:

```python
# Conceptual sketch: "Microscope" is a placeholder for your motorized-stage driver
# (e.g., an OpenFlexure stage client); replace move_to() with the real API call.
class Microscope:
    def move_to(self, x, y):
        print(f"Moving stage to x={x}, y={y}")  # stand-in for the hardware command

def move_microscope_to_roi(microscope, roi):
    # Command the motorized microscope stage to the ROI coordinates
    microscope.move_to(roi['x'], roi['y'])

# Sample ROI detected using OpenCV (in practice this would be detected dynamically)
roi = {'x': 150, 'y': 200}

# Move the microscope to the detected ROI
microscope = Microscope()
move_microscope_to_roi(microscope, roi)
```

You can integrate the motorized microscope with your **OpenCV** detection pipeline, enabling real-time analysis and precise control; a sketch of the pixel-to-stage coordinate conversion involved is shown below.
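
One way this integration can look is sketched here: an ROI centre in pixel coordinates is converted to a relative stage move. The microns-per-pixel scale, the image centre, and the `move_relative` driver call are hypothetical placeholders for your own calibration and hardware interface.

```python
# Sketch of closing the loop: convert a pixel-space ROI into stage coordinates and
# issue a move. The scale factor and driver object are hypothetical placeholders.
UM_PER_PIXEL = 1.5          # assumed calibration: image scale in microns per pixel
IMAGE_CENTER = (320, 240)   # assumed image centre in pixels

def roi_to_stage_offset(roi):
    """Translate an ROI centre (pixels) into a relative stage move (microns)."""
    dx_um = (roi['x'] - IMAGE_CENTER[0]) * UM_PER_PIXEL
    dy_um = (roi['y'] - IMAGE_CENTER[1]) * UM_PER_PIXEL
    return dx_um, dy_um

def centre_on_roi(microscope, roi):
    dx_um, dy_um = roi_to_stage_offset(roi)
    microscope.move_relative(dx_um, dy_um)  # placeholder call on your stage driver

# Example with an ROI from the OpenCV pipeline:
# centre_on_roi(microscope, {'x': 150, 'y': 200})
```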

---

## 🚀 Quiz

::::{tab-set}
:sync-group: category

:::{tab-item} Sp/Su 2024
:sync: sp2024

[Quiz URL]
:::

::::

---

## 📄 Assignment

Create a script that uses **OpenCV** and **AprilTags** to detect regions of interest in a sample and control a motorized microscope to move to those regions.

> **Review comment (@sgbaird, Sep 23, 2024):** This could be for:
>
> 1. imaging particles (particle size distribution) (e.g., maybe something with https://imagej.net/imaging/particle-analysis controlled via https://imagej.net/scripting/pyimagej)
> 2. searching for an object/item within a region (blank elsewhere)
> 3. grain boundary distribution for an etched metal sample
> 4. a biological sample (and some related task)
>
> AccelerationConsortium/ac-training-lab#28


✅ First, preprocess the sample image using **OpenCV** (apply filters and detect edges).

✅ Then, detect and extract **AprilTags** for spatial referencing.

✅ Finally, automate the movement of the motorized microscope to the detected regions of interest.

Example tasks:
1. Write a script to process an image and identify ROIs using **OpenCV**.
2. Detect **AprilTags** and use the tag ID for tracking and referencing.

   > **Review comment (@sgbaird, Sep 23, 2024):** Create a separate sub-assignment focused on AprilTags

3. Integrate a motorized microscope to search and focus on regions of interest.
