Adaptive task execution and machine learning package.
ATEM - Adaptive Task Execution Model
Getting Started with ATEM
Welcome to ATEM, the adaptive task execution and machine learning package designed for FTC robotics and beyond. Follow this quick-start guide to get up and running with ATEM in your project.
1. Install ATEM
pip install atem
2. Train a Model in Python
ATEM provides a Python API to train a TensorFlow Lite model based on your tasks.
- Create a tasks.json file to define the tasks your robot will perform. Here’s an example:
{
"tasks": [
{ "name": "Observation Zone", "time": 5, "points": 3 },
{ "name": "Net Zone", "time": 4, "points": 2 },
{ "name": "Low Basket", "time": 5, "points": 4 },
{ "name": "High Basket", "time": 7, "points": 9 }
]
}
- Train a Model: Use the following Python script to train and save a TensorFlow Lite model:
from atem.model_train import ModelTrainer
# Paths to tasks file and output model
tasks_file = "tasks.json"
output_model_path = "adaptive_model.tflite"
# Initialize ModelTrainer
trainer = ModelTrainer(tasks_file=tasks_file, output_model_path=output_model_path)
# Train the model and save it
trainer.train_and_save_model(epochs=20, batch_size=16)
print("Task-to-index mappings:")
print(trainer.get_task_mappings())
3. Integrate the Model in Java
ATEM models can be integrated into Java-based FTC projects using TensorFlow Lite for inference.
- Step 1: Add TensorFlow Lite Dependencies
dependencies {
implementation 'org.tensorflow:tensorflow-lite:2.11.0'
}
- Step 2: Include the Trained Model in Your Project. Place the adaptive_model.tflite file in the assets directory of your FTC project:
TeamCode/src/main/assets/adaptive_model.tflite
- Step 3: Implement Model Inference. Use the following Java code to load the trained model and run inference (the input encoding shown must match how your model was trained):
import org.tensorflow.lite.Interpreter;
import java.io.File;
import java.util.Map;
public class ATEMModelInterpreter {
    private final Interpreter interpreter;
    public ATEMModelInterpreter(String modelPath) throws Exception {
        // Load the model from a file. TFLite rejects heap ByteBuffers, so pass a
        // File (memory-mapped) or a direct ByteBuffer in native byte order.
        this.interpreter = new Interpreter(new File(modelPath));
    }
    public String predictNextTask(String currentTask, Map<String, Double> sensorData,
            Map<String, Integer> taskToIndex, Map<Integer, String> indexToTask, int maxLength) {
        // Encode the current task index followed by the sensor values, zero-padded
        // to maxLength; the feature order must match how the model was trained.
        float[][] input = new float[1][maxLength];
        input[0][0] = taskToIndex.getOrDefault(currentTask, 0);
        int i = 1;
        for (Double value : sensorData.values()) {
            if (i >= maxLength) break;
            input[0][i++] = value.floatValue();
        }
        // The model outputs one confidence score per known task.
        float[][] output = new float[1][indexToTask.size()];
        interpreter.run(input, output);
        // Return the task with the highest predicted score.
        int best = 0;
        for (int j = 1; j < output[0].length; j++) {
            if (output[0][j] > output[0][best]) best = j;
        }
        return indexToTask.get(best);
    }
}
4. Example Usage in an FTC Project
Use the trained model to predict and execute tasks dynamically.
- Step 1: Initialize the Model Interpreter
ATEMModelInterpreter modelInterpreter = new ATEMModelInterpreter("adaptive_model.tflite");
- Step 2: Use Predictions to Execute Tasks
String nextTask = modelInterpreter.predictNextTask(currentTask, sensorData, taskToIndex, indexToTask, 5);
System.out.println("Predicted Next Task: " + nextTask);
All set!
Features
- Model Training:
  - Train a TensorFlow model to optimize task sequences for maximum points.
  - Tasks and their respective points are dynamically loaded from a JSON file.
  - Outputs a TensorFlow Lite model for lightweight deployment.
- Model Interpretation:
  - Given a list of tasks, predicts the optimal sequence and total points.
  - Outputs human-readable task orders and scores.
How It Works
1. Task JSON File
The tasks.json file defines the tasks available for the autonomous phase:
{
"tasks": [
{ "name": "High Basket", "points": 10, "time": 5 },
{ "name": "Low Basket", "points": 5, "time": 3 },
"..."
]
}
2. Training the Model
The model is trained on task data to learn which sequences of tasks maximize points within a time limit (a rough sketch follows the steps below):
- Loads tasks from the tasks.json file.
- Generates random task sequences within the given time constraint.
- Encodes tasks and trains a model to predict scores based on sequences.
- Outputs a TensorFlow Lite model for deployment.
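As a rough illustration of the sequence-generation step (not the package's internal code), the sketch below draws random task sequences that fit an assumed time budget and reports their score; the TIME_LIMIT value and helper names are invented for this example:
import json
import random

TIME_LIMIT = 30  # assumed time budget for the autonomous phase, in seconds

def load_tasks(path="tasks.json"):
    with open(path) as f:
        return json.load(f)["tasks"]

def random_sequence(tasks, time_limit=TIME_LIMIT):
    """Pick tasks at random until the remaining time budget is used up."""
    sequence, remaining = [], time_limit
    candidates = [t for t in tasks if t["time"] <= remaining]
    while candidates:
        task = random.choice(candidates)
        sequence.append(task)
        remaining -= task["time"]
        candidates = [t for t in tasks if t["time"] <= remaining]
    return sequence

tasks = load_tasks()
sequence = random_sequence(tasks)
print([t["name"] for t in sequence], "->", sum(t["points"] for t in sequence), "points")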
3. Interpreting the Model
The interpreter script takes a sequence of tasks, predicts the total points, and outputs the best sequence in human-readable format.
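As a minimal stand-in for the package's interpreter script, the following sketch scores one encoded sequence with the exported adaptive_model.tflite using TensorFlow's generic TFLite interpreter; the exact input layout (a single padded vector of task indices) is an assumption:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="adaptive_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Example padded sequence of task indices; shape and dtype are assumptions.
encoded_sequence = np.array([[1, 3, 2, 0, 0]], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"],
                       encoded_sequence.astype(input_details[0]["dtype"]))
interpreter.invoke()

predicted_points = interpreter.get_tensor(output_details[0]["index"])
print("Predicted total points:", predicted_points)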
Technical Details
Model Architecture
- Input:
  - Task indices (embedded into dense vectors).
  - Task times (numeric values).
- Hidden Layers:
  - Dense layers for feature extraction and sequence analysis.
- Output:
  - Predicted total points for a given task sequence.
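A Keras sketch consistent with the architecture described above (the layer sizes, sequence length, and task count are illustrative choices, not the package's exact values):
import tensorflow as tf

MAX_LENGTH = 5    # assumed maximum tasks per sequence
NUM_TASKS = 10    # assumed number of distinct task names

task_indices = tf.keras.Input(shape=(MAX_LENGTH,), dtype="int32", name="task_indices")
task_times = tf.keras.Input(shape=(MAX_LENGTH,), name="task_times")

# Embed task indices into dense vectors and combine them with the numeric times.
embedded = tf.keras.layers.Embedding(input_dim=NUM_TASKS, output_dim=8)(task_indices)
flat = tf.keras.layers.Flatten()(embedded)
features = tf.keras.layers.Concatenate()([flat, task_times])

# Dense hidden layers for feature extraction over the sequence.
hidden = tf.keras.layers.Dense(64, activation="relu")(features)
hidden = tf.keras.layers.Dense(32, activation="relu")(hidden)

# Single regression output: predicted total points for the sequence.
points = tf.keras.layers.Dense(1, name="predicted_points")(hidden)

model = tf.keras.Model(inputs=[task_indices, task_times], outputs=points)
model.compile(optimizer="adam", loss="mse")

# After training, the model can be exported to TensorFlow Lite for deployment
# (shown here on the untrained model purely to illustrate the export step).
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("adaptive_model.tflite", "wb").write(tflite_model)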
Data Encoding
- Task names are encoded as numerical indices.
- Task times are padded to a fixed length for uniform input.
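For instance, a two-task sequence could be encoded and padded like this (the index mapping and a maximum length of 5 are assumptions for illustration):
# A small sketch of the encoding described above.
task_to_index = {"High Basket": 0, "Low Basket": 1, "Net Zone": 2, "Observation Zone": 3}

def encode_sequence(task_names, task_times, max_length=5, pad_value=0):
    indices = [task_to_index[name] for name in task_names]
    indices += [pad_value] * (max_length - len(indices))
    times = list(task_times) + [pad_value] * (max_length - len(task_times))
    return indices, times

print(encode_sequence(["High Basket", "Low Basket"], [5, 3]))
# ([0, 1, 0, 0, 0], [5, 3, 0, 0, 0])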
Adaptive Task Prediction Model
Overview
The Adaptive Task Prediction Model is designed to enable real-time decision-making for autonomous robots. It processes sensor data after each task completion, predicts the next optimal task, and adjusts its strategy based on the robot’s current state and environmental feedback.
This dynamic approach ensures the robot maximizes performance, conserves resources, and adapts to unexpected changes in real-world scenarios.
Workflow
1. Sensor Data Collection
After completing each task, the robot gathers sensor data to provide a snapshot of its current state:
- Time Elapsed: Time taken to complete the task.
- Distance to Target: The robot's proximity to the next goal.
- Gyro Angle: Orientation relative to the reference.
- Battery Level: Remaining energy for task prioritization.
- Additional sensor inputs like vision or LIDAR can be incorporated.
2. Feature Encoding
Sensor data and the current task ID are encoded into a format compatible with the machine learning model:
- Continuous values are normalized for consistent input ranges.
- Categorical values are converted to embeddings or indices.
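A small sketch of this encoding step; the normalization ranges (a 30-second phase, a 360-degree angle, a percentage battery level) and the index mapping are assumptions, not values taken from the package:
task_to_index = {"Observation Zone": 0, "Net Zone": 1, "Low Basket": 2, "High Basket": 3}

def encode_state(current_task, sensor_data):
    return [
        float(task_to_index[current_task]),
        sensor_data["time_elapsed"] / 30.0,    # assumed 30 s phase length
        sensor_data["distance_to_target"],     # assumed already scaled to [0, 1]
        sensor_data["gyro_angle"] / 360.0,     # normalize angle to [0, 1]
        sensor_data["battery_level"] / 100.0,  # percentage -> [0, 1]
    ]

state = encode_state("Observation Zone",
                     {"time_elapsed": 20, "distance_to_target": 0.5,
                      "gyro_angle": 45, "battery_level": 70})
print(state)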
3. Real-Time Model Inference
The model processes the encoded input to:
- Predict the Next Task:
- Outputs the most likely task to maximize performance.
- Provide Task Scores:
- Confidence levels for all possible tasks.
Example:
Input:
- Current Task: "Observation Zone"
- Sensor Data: {time_elapsed: 20, distance_to_target: 0.5, gyro_angle: 45, battery_level: 70}
Output:
- Predicted Next Task: "High Basket"
- Task Scores: [0.1, 0.8, 0.1]
Model Inferencing
The Adaptive Task Prediction Model utilizes a TensorFlow Lite (TFLite) model for efficient inference. This lightweight, optimized model is specifically designed for resource-constrained environments like robotics systems, ensuring fast and accurate predictions in real time.
The model requires encoded inputs representing:
- Current Task: Encoded as a numerical ID using the task_to_index mapping.
- Sensor Data: Real-time inputs such as:
- time_elapsed: Normalized elapsed time.
- distance_to_target: Scaled distance to the next target.
- gyro_angle: Angle, normalized to a fixed range.
- battery_level: Percentage value normalized between 0 and 1.
The inputs are padded to match the model's expected dimensions if needed.
Once the input data is prepared, it is passed into the TFLite interpreter:
- The interpreter runs the input through the pre-trained model.
- The output includes:
- Predicted Task Scores: Confidence scores for each possible task.
- Selected Task: The task with the highest score.
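A minimal sketch of this inference step with TensorFlow's TFLite interpreter; the flat five-value input, the normalization, and the three-task index mapping are assumptions for illustration:
import numpy as np
import tensorflow as tf

index_to_task = {0: "Observation Zone", 1: "High Basket", 2: "Low Basket"}

interpreter = tf.lite.Interpreter(model_path="adaptive_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Encoded state: [task index, time_elapsed, distance_to_target, gyro_angle, battery_level]
encoded_state = np.array([[0, 20 / 30.0, 0.5, 45 / 360.0, 0.7]], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"],
                       encoded_state.astype(input_details[0]["dtype"]))
interpreter.invoke()

task_scores = interpreter.get_tensor(output_details[0]["index"])[0]
next_task = index_to_task[int(np.argmax(task_scores))]
print("Task scores:", task_scores, "-> next task:", next_task)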
How the AI Adapts in Real-Time
- After completing a task, the robot feeds its current state (task + sensor data) into the model.
- The AI processes the input and:
- Predicts the next task to perform.
- Scores all potential tasks to indicate confidence levels.
- The robot executes the predicted task with the highest score.
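A toy closed-loop sketch of this cycle; execute_task and read_sensors are stand-in stubs, and predict_next_task here returns a random task where a real implementation would wrap the TFLite call shown above:
import random

def execute_task(task):
    # Stand-in for the robot's actual task execution.
    print("Executing:", task)

def read_sensors():
    # Stand-in for real sensor reads.
    return {"time_elapsed": random.uniform(0, 30), "distance_to_target": random.random(),
            "gyro_angle": random.uniform(0, 360), "battery_level": random.uniform(50, 100)}

def predict_next_task(current_task, sensor_data):
    # Placeholder prediction; swap in the TFLite inference from the previous snippet.
    return random.choice(["Net Zone", "Low Basket", "High Basket"])

current_task = "Observation Zone"
for _ in range(3):
    execute_task(current_task)
    sensor_data = read_sensors()
    current_task = predict_next_task(current_task, sensor_data)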
Features
- Adaptive Task Model: Predict the next task based on sensor data and task history.
- Task Training: Train custom machine learning models using a tasks.json file.
- Real-time Adaptation: Simulate real-world scenarios for task execution.
- Pathfinding Integration: Extendable for integration with A* pathfinding for robotics.
- Lightweight TensorFlow Lite Integration: For efficient model inference.
Installation
pip install atem
Development Setup
API Reference