ASAP CLI Tool
Project description
ASAP CLI - AWS Migration Assistant
ASAP (AWS SQL Analytics Platform) is an AI-powered CLI tool designed to accelerate SQL and ETL migrations to AWS. Built by the AWS ProServ Data & Analytics Latam Team, it provides expert-level migration assistance with interactive guidance and validation.
Overview
ASAP CLI leverages advanced AI agents to:
- Migrate SQL/ETL from legacy platforms (Oracle, Teradata, SQL Server) to AWS services
- Optimize queries for performance and cost efficiency
- Assess migration complexity and risks
- Generate implementation roadmaps with step-by-step guidance
- Suggest AWS-native architectures using Glue, EMR, Redshift, and Athena
- Validate data quality and provide testing strategies
Key Features
SQL/ETL Migration
- Multi-platform support: Oracle, Teradata, SQL Server → AWS
- Target platforms: Amazon Redshift, Spark SQL, AWS Glue
- Interactive migration process with feedback loops
- Semantic preservation of business logic and data types
Query & Transformation Optimization
- Performance tuning recommendations
- Cost optimization strategies
- AWS-specific optimizations (DISTKEY, SORTKEY, partitioning)
Expert Migration Assessment
- Complexity and risk analysis
- Compatibility issue identification
- Business rules validation
- Migration readiness evaluation
Implementation Planning
- Detailed migration roadmaps
- Step-by-step execution plans
- Field mapping documentation
- Test case generation
AWS Architecture Guidance
- Service recommendations (Glue, EMR, Redshift, Athena)
- Best practices implementation
- Performance optimization
- Security considerations
Debug & Validation Tools
- SQL debugging and correction
- Data quality validation
- Test result comparison
- Automated fixing with explanations
Quick Start
Prerequisites
- Python 3.10+
- AWS CLI configured with appropriate permissions
- AWS Bedrock access (Claude models)
- Git for version control
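A quick way to sanity-check these prerequisites from a terminal (a minimal sketch; the profile and region are whatever you have configured locally):
# Check local tooling versions
python3 --version   # should report 3.10 or newer
aws --version
git --version
# Confirm the AWS CLI resolves to the expected account
aws sts get-caller-identity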
Installation
1. Clone the repository:
   git clone git@ssh.gitlab.aws.dev:jorsie/asap-cli.git
   cd asap-cli
2. Install dependencies:
   pip install -r requirements.txt
3. Install the CLI tool:
   pip install -e .
4. Set up your AWS account (a minimal sketch follows this list):
   - Go to your AWS account.
   - Create a user with access granted to Amazon Bedrock.
   - Configure your access key and secret access key for the AWS CLI.
   - Activate your AWS CLI profile.
   - Go to Amazon Bedrock and ensure access to Claude 3.7 Sonnet and Claude Sonnet 4.
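A minimal sketch of that AWS CLI setup, assuming a dedicated profile named asap (the profile name is only an example):
# Configure credentials for a dedicated profile
aws configure --profile asap
# Activate the profile for the current shell session
export AWS_PROFILE=asap
# Confirm the credentials resolve to the expected account
aws sts get-caller-identity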
Alternative Installation
1. Install from PyPI:
   pip install asap-cli
2. Set up your AWS account and Bedrock model access as described in step 4 of the installation above.
Usage
Start the interactive migration assistant:
asap agent run
The CLI will guide you through:
- Pipeline identification - Name your migration pipeline
- Source platform - Specify your current SQL dialect
- Target platform - Choose your AWS destination
- Migration execution - Follow interactive prompts
- Validation & debugging - Test and refine results
Usage Examples
Example 1: SQL Server to Redshift Migration
asap agent run
migration/                          # Migration workspace
└── {pipeline_name}/
    └── source/                     # Source SQL files
- First, open a terminal in the directory where you are going to work.
- Create the migration directory and, inside it, a new folder with the pipeline name.
- Inside the pipeline-name folder, add a new folder called source and place the .sql file you want to migrate inside it. (The system accepts .sql and .txt files.)
- After that, you are ready to run your first migration; a minimal setup sketch is shown below.
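A minimal sketch of that setup for a pipeline named sales, matching the example below (the file name sales_report.sql is only a placeholder):
# Create the workspace layout the agent expects
mkdir -p migration/sales/source
# Copy the SQL you want to migrate into the source folder
cp /path/to/sales_report.sql migration/sales/source/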
Command example
aws-proserv:~$ I need to migrate my sales pipeline from SQL Server to Redshift
# The agent will:
# 1. Ask for pipeline name: "sales"
# 2. Request source SQL files in migration/sales/source/
# 3. Generate assessment and translation
# 4. Provide interactive feedback loop
# 5. Output optimized Redshift SQL
Example 2: Debugging SQL Issues
migration/                          # Migration workspace (use this as an example)
└── {pipeline_name}/
    ├── source/                     # Source SQL files
    ├── target_sql/                 # Translated SQL output
    ├── assessment/                 # Migration assessments
    ├── expected_result/            # Test expected results
    └── query_result/               # Actual test results
For debugging, you first need to prepare two things:
- Get the expected result of your query; this is the result from your source SQL. Save it in the expected_result folder inside your pipeline-name folder.
- Get the actual result from the target SQL query; this is the result you obtain after executing the same values with the target SQL. Save it in the query_result folder. Note: the system only accepts files in .csv format.
- The expected_result and query_result files must share the same name; this name is your test case name, and the agent uses it to run the debugging process. You can have more than one test case; the only rules are that each expected_result/query_result pair uses the same name and that test case names are not repeated. A sketch of this layout is shown below.
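A minimal sketch of that layout for the customer_analytics pipeline and the revenue_calculation test case used in the command below (the local CSV paths are placeholders):
# Expected output produced on the source platform
cp /path/to/expected.csv migration/customer_analytics/expected_result/revenue_calculation.csv
# Actual output produced by the translated SQL
cp /path/to/actual.csv migration/customer_analytics/query_result/revenue_calculation.csv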
Command example
aws-proserv:~$ Debug my customer_analytics pipeline with test case "revenue_calculation"
# The agent will:
# 1. Compare expected vs actual results
# 2. Identify discrepancies
# 3. Fix the SQL with explanations
# 4. Save corrected version
Project Structure
asap-cli/
├── asap/
│   ├── cli.py                                # Main CLI entry point
│   └── agent/
│       ├── main.py                           # Agent command registration
│       └── utils/
│           ├── agents/
│           │   ├── config.py                 # Agent configuration
│           │   ├── factory_agent.py          # Agent factory
│           │   ├── migration_assistant_agent.py  # Core agent logic
│           │   └── tools/
│           │       ├── translate/            # SQL translation tools
│           │       ├── debug/                # Debugging tools
│           │       └── general/              # Shared utilities
│           └── general/
│               └── ui.py                     # User interface utilities
├── migration/                                # Migration workspace (use this as an example)
│   └── {pipeline_name}/
│       ├── source/                           # Source SQL files
│       ├── target_sql/                       # Translated SQL output
│       ├── assessment/                       # Migration assessments
│       ├── expected_result/                  # Test expected results
│       └── query_result/                     # Actual test results
├── requirements.txt                          # Python dependencies
├── pyproject.toml                            # Package configuration
└── README.md                                 # This file
Configuration
AWS Bedrock Models
ASAP CLI uses multiple Claude models for different tasks:
- Primary: us.anthropic.claude-sonnet-4-20250514-v1:0
- Fallback: us.anthropic.claude-3-7-sonnet-20250219-v1:0
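One way to confirm that the Claude models are available to your account (a sketch, assuming the us-east-1 region used elsewhere in this README):
# List the Anthropic foundation models visible to your credentials
aws bedrock list-foundation-models --by-provider anthropic --region us-east-1
# The output should include claude-sonnet-4 and claude-3-7-sonnet model IDs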
Documentation
Core Components
- MigrationAssistantAgent: Main conversational agent with tool integration
- TranslateSQL Tool: Handles SQL dialect conversion and optimization
- DebugSQL Tool: Identifies and fixes SQL issues with explanations
- BedrockLLamager: Manages AWS Bedrock model interactions with failover
Migration Process
1. Assessment Phase:
   - Analyzes source SQL complexity
   - Identifies compatibility issues
   - Maps business rules and requirements
2. Translation Phase:
   - Converts SQL to the target dialect
   - Applies AWS-specific optimizations
   - Preserves semantic accuracy
3. Validation Phase:
   - Compares expected vs. actual results
   - Provides debugging assistance
   - Generates corrected SQL with explanations
Troubleshooting
Common Issues
"Could not load agent command"
# Ensure Python path includes the project directory
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
AWS Bedrock Access Denied
# Verify AWS credentials and Bedrock access
aws bedrock list-foundation-models --region us-east-1
Migration Files Not Found
# Ensure proper directory structure
mkdir -p migration/{pipeline_name}/{source,target_sql,assessment,expected_result,query_result}
Supported Platforms
Source Platforms
- Oracle (PL/SQL, Oracle SQL)
- Teradata (Teradata SQL)
- Microsoft SQL Server (T-SQL)
- Generic SQL (ANSI SQL)
- Other SQL platforms (ANSI SQL)
Target Platforms
- Amazon Redshift (PostgreSQL-based)
- Spark SQL (Apache Spark)
- AWS Glue (PySpark/Scala)
- Amazon Athena (Presto SQL)
Team
Developed by: AWS ProServ Data & Analytics Latam Team
Author: Jorge Sierra (jorsie@amazon.com)
Version: 0.0.1
Support
For support and questions:
- Internal AWS users: Contact the ProServ Data & Analytics Latam Team (jorsie on Slack)
- Issues: Use the internal GitLab issue tracker
Ready to accelerate your data migration journey with AWS ProServ expertise!
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file asap_cli-0.0.3.tar.gz.
File metadata
- Download URL: asap_cli-0.0.3.tar.gz
- Upload date:
- Size: 26.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 207aa305320cf8649ace41aa4439c1ae6d0daf754d132f28c6f6fcc03a4a90e2 |
| MD5 | 500e424aab87c77430c2be2026c04686 |
| BLAKE2b-256 | 8efb07cde666342388e3050e977e96be4ba4b2c03b32876bcc2e5ff93e7c2f1f |
File details
Details for the file asap_cli-0.0.3-py3-none-any.whl.
File metadata
- Download URL: asap_cli-0.0.3-py3-none-any.whl
- Upload date:
- Size: 26.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e0b0aaca4ac2bb51102ce2769d909262b9eea49d8ada6c7761890018948d48d5 |
| MD5 | ad0b07c45dceb4d1d25b879d1d780c1e |
| BLAKE2b-256 | 9e92b4bad13f7ddd9711107f952336608cd83a2ee02950c692ce44f48e3b43d2 |