ApiLinker
A universal bridge to connect, map, and automate data transfer between any two REST APIs
ApiLinker is an open-source Python package that simplifies the integration of REST APIs by providing a universal bridging solution. Built for developers, data engineers, and researchers who need to connect different systems without writing repetitive boilerplate code.
Features
- Universal Connectivity - Connect any two REST APIs with simple configuration
- Powerful Mapping - Transform data between APIs with field mapping and path expressions
- Data Transformation - Apply built-in or custom transformations to your data
- Authentication & Security - Support for API Key, Bearer Token, Basic Auth, and multiple OAuth2 flows (including PKCE and Device Flow). Optional secure credential storage and role-based access control.
- Enterprise Secret Management - Integrate with HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, and Google Secret Manager for secure credential storage
- Flexible Configuration - Use YAML/JSON or configure programmatically in Python
- Automated Scheduling - Run syncs once, on intervals, or using cron expressions
- Schema Validation - JSON Schema validation for responses and requests, with optional strict mode and readable diffs
- Plugin Architecture - Extend with custom connectors, transformers, and authentication methods
- Pagination Handling - Automatic handling of paginated API responses
- Robust Error Handling - Circuit breakers, Dead Letter Queues (DLQ), and configurable recovery strategies
- Observability & Monitoring - OpenTelemetry integration with distributed tracing and Prometheus metrics export
- Scientific Connectors - Built-in connectors for research APIs (NCBI/PubMed, arXiv) with domain-specific functionality
- Minimal Dependencies - Lightweight core with minimal external requirements
Security
APILinker provides security features to protect your API credentials and data:
Role-Based Access Control
# Enable multi-user access with different permission levels
linker = ApiLinker(
    security_config={
        "enable_access_control": True,
        "users": [
            {"username": "admin1", "role": "admin"},
            {"username": "viewer1", "role": "viewer"}
        ]
    }
)
For more details, see the Security Documentation. Note: ApiLinker defaults to no request/response encryption and recommends HTTPS and provider-recommended authentication. Optional request/response encryption utilities are available for advanced scenarios; review the security docs before enabling.
Table of Contents
- Installation
- Quick Start
- Configuration
- Authentication Methods
- Secret Management
- Field Mapping
- Error Handling
- Observability & Monitoring
- Data Transformations
- Scheduling
- Command Line Interface
- Schema Validation and Strict Mode
- Python API
- Examples
- Extending ApiLinker
- Roadmap
- Contributing
- Documentation
- License
Installation
Standard Installation
Install ApiLinker using pip (Python's package manager):
pip install apilinker
If you're using Windows, you might need to use:
py -m pip install apilinker
Make sure you have Python 3.8 or newer installed. To check your Python version:
python --version
# or
py --version
Development Installation
To install from source (for contributing or customizing):
# Clone the repository
git clone https://github.com/kkartas/apilinker.git
cd apilinker
# Install in development mode with dev dependencies
pip install -e ".[dev]"
# Install with documentation tools
pip install -e ".[docs]"
Verifying Installation
To verify ApiLinker is correctly installed, run:
python -c "import apilinker; print(apilinker.__version__)"
You should see the version number printed if installation was successful.
Beginner's Guide
New to API integration? Follow this step-by-step guide to get started with ApiLinker.
Step 1: Install ApiLinker
pip install apilinker
Step 2: Create Your First API Connection
Let's connect to a public API (Weather API) and print some data:
from apilinker import ApiLinker

# Create an API connection
linker = ApiLinker()

# Configure a simple source
linker.add_source(
    type="rest",
    base_url="https://api.openweathermap.org/data/2.5",
    endpoints={
        "get_weather": {
            "path": "/weather",
            "method": "GET",
            "params": {
                "q": "London",
                "appid": "YOUR_API_KEY"  # Get a free key at openweathermap.org
            }
        }
    }
)

# Fetch data from the API
weather_data = linker.fetch("get_weather")

# Print results
print(f"Temperature: {weather_data['main']['temp']} K")
print(f"Conditions: {weather_data['weather'][0]['description']}")
Step 3: Save the Script and Run It
Save the above code as weather.py and run it:
python weather.py
Step 4: Try a Data Transformation
Let's convert the temperature from Kelvin to Celsius:
# Add this to your script
def kelvin_to_celsius(kelvin_value):
    return kelvin_value - 273.15

linker.mapper.register_transformer("kelvin_to_celsius", kelvin_to_celsius)

# Get the temperature in Celsius
temp_kelvin = weather_data['main']['temp']
temp_celsius = linker.mapper.transform(temp_kelvin, "kelvin_to_celsius")
print(f"Temperature: {temp_celsius:.1f}°C")
Common Beginner Issues
- ImportError: Make sure ApiLinker is installed (pip install apilinker)
- API Key errors: Register for a free API key at the service you're using
- Connection errors: Check your internet connection and API endpoint URL
- TypeError: Make sure you're passing the correct data types to functions
Quick Start
Using the CLI
Create a configuration file config.yaml:
source:
  type: rest
  base_url: https://api.example.com/v1
  auth:
    type: bearer
    token: ${SOURCE_API_TOKEN}  # Reference environment variable
  endpoints:
    list_items:
      path: /items
      method: GET
      params:
        updated_since: "{{last_sync}}"  # Template variable
      pagination:
        data_path: data
        next_page_path: meta.next_page
        page_param: page

target:
  type: rest
  base_url: https://api.destination.com/v2
  auth:
    type: api_key
    header: X-API-Key
    key: ${TARGET_API_KEY}
  endpoints:
    create_item:
      path: /items
      method: POST

mapping:
  - source: list_items
    target: create_item
    fields:
      - source: id
        target: external_id
      - source: name
        target: title
      - source: description
        target: body.content
      - source: created_at
        target: metadata.created
        transform: iso_to_timestamp
      # Conditional field mapping
      - source: tags
        target: labels
        condition:
          field: tags
          operator: exists
        transform: lowercase

schedule:
  type: interval
  minutes: 60

logging:
  level: INFO
  file: apilinker.log
Run a sync with:
apilinker sync --config config.yaml
Run a dry run to see what would happen without making changes:
apilinker sync --config config.yaml --dry-run
Run a scheduled sync based on the configuration:
apilinker run --config config.yaml
Probe schemas and suggest a starter mapping from example payloads:
apilinker probe-schema --source source_sample.json --target target_sample.json
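The pagination block in the YAML above (data_path, next_page_path, page_param) describes a common cursor pattern: collect records from data, then follow meta.next_page until it is exhausted. A minimal stdlib sketch of that loop against a stubbed fetch function (this illustrates the pattern, not ApiLinker's internal client):

```python
def fetch_page(page):
    """Stub standing in for an HTTP GET; returns two fake pages of results."""
    pages = {
        1: {"data": [{"id": 1}, {"id": 2}], "meta": {"next_page": 2}},
        2: {"data": [{"id": 3}], "meta": {"next_page": None}},
    }
    return pages[page]

def fetch_all(data_path="data", next_page_path=("meta", "next_page"), first_page=1):
    """Accumulate records across pages, following the next-page cursor."""
    records, page = [], first_page
    while page is not None:
        body = fetch_page(page)
        records.extend(body[data_path])
        cursor = body
        for key in next_page_path:  # walk meta.next_page
            cursor = cursor.get(key) if isinstance(cursor, dict) else None
        page = cursor
    return records

print(len(fetch_all()))  # 3
```

In a real sync the next page value would be passed back as the page_param query parameter on the following request.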
Using as a Python Library
from apilinker import ApiLinker

# Initialize with config file
linker = ApiLinker(config_path="config.yaml")

# Or configure programmatically
linker = ApiLinker()

# Step 1: Set up your source API connection
linker.add_source(
    type="rest",                        # API type (REST is most common)
    base_url="https://api.github.com",  # Base URL of the API
    auth={                              # Authentication details
        "type": "bearer",               # Using bearer token authentication
        "token": "${GITHUB_TOKEN}"      # Reference to an environment variable
    },
    endpoints={                         # Define API endpoints
        "list_issues": {                # A name you choose for this endpoint
            "path": "/repos/owner/repo/issues",  # API path
            "method": "GET",            # HTTP method
            "params": {"state": "all"}  # Query parameters
        }
    }
)

# Step 2: Set up your target API connection
linker.add_target(
    type="rest",
    base_url="https://gitlab.com/api/v4",
    auth={
        "type": "bearer",
        "token": "${GITLAB_TOKEN}"
    },
    endpoints={
        "create_issue": {
            "path": "/projects/123/issues",
            "method": "POST"  # This endpoint will receive data
        }
    }
)

# Step 3: Define how data maps from source to target
linker.add_mapping(
    source="list_issues",   # Source endpoint name (from Step 1)
    target="create_issue",  # Target endpoint name (from Step 2)
    fields=[                # Field mapping instructions
        {"source": "title", "target": "title"},      # Map source title → target title
        {"source": "body", "target": "description"}  # Map source body → target description
    ]
)

# Step 4: Execute the sync (one-time)
result = linker.sync()
print(f"Synced {result.count} records")

# Step 5 (Optional): Set up scheduled syncing
linker.add_schedule(interval_minutes=60)  # Run every hour
linker.start_scheduled_sync()
Step-by-Step Explanation:
- Import the library: from apilinker import ApiLinker
- Create an instance: linker = ApiLinker()
- Configure source API: Define where to get data from
- Configure target API: Define where to send data to
- Create mappings: Define how fields translate between APIs
- Run the sync: Either once or on a schedule
Configuration
ApiLinker uses a YAML configuration format with these main sections:
Source and Target API Configuration
Both source and target sections follow the same format:
source:  # or target:
  type: rest                            # API type
  base_url: https://api.example.com/v1  # Base URL
  auth:                                 # Authentication details
    # ...
  endpoints:                            # API endpoints
    # ...
  timeout: 30                           # Request timeout in seconds (optional)
  retry_count: 3                        # Number of retries (optional)
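The retry_count setting pairs naturally with exponential backoff between attempts. A generic stdlib sketch of that pattern (ApiLinker's actual retry internals may differ; the call argument is a placeholder for any request function):

```python
import time

def with_retries(call, retry_count=3, base_delay=0.5):
    """Invoke call(); on failure retry up to retry_count times,
    doubling the delay between attempts (exponential backoff)."""
    for attempt in range(retry_count + 1):
        try:
            return call()
        except Exception:
            if attempt == retry_count:
                raise  # retries exhausted, surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage (hypothetical client): with_retries(lambda: client.get("/items"), retry_count=3)
```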
Authentication Methods
ApiLinker supports multiple authentication methods:
# API Key Authentication
auth:
  type: api_key
  key: your_api_key  # Or ${API_KEY_ENV_VAR}
  header: X-API-Key  # Header name

# Bearer Token Authentication
auth:
  type: bearer
  token: your_token  # Or ${TOKEN_ENV_VAR}

# Basic Authentication
auth:
  type: basic
  username: your_username  # Or ${USERNAME_ENV_VAR}
  password: your_password  # Or ${PASSWORD_ENV_VAR}

# OAuth2 Client Credentials
auth:
  type: oauth2_client_credentials
  client_id: your_client_id          # Or ${CLIENT_ID_ENV_VAR}
  client_secret: your_client_secret  # Or ${CLIENT_SECRET_ENV_VAR}
  token_url: https://auth.example.com/token
  scope: read write  # Optional
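These YAML blocks map one-to-one onto the Python dictionaries accepted by add_source and add_target. A sketch of small helpers that build two of the variants as plain dicts, reading credentials from the environment (the helper names are illustrative, not part of ApiLinker's API):

```python
import os

def bearer_auth(token_env: str) -> dict:
    """Build a bearer-token auth block, reading the token from an env var."""
    return {"type": "bearer", "token": os.environ.get(token_env, "")}

def api_key_auth(key_env: str, header: str = "X-API-Key") -> dict:
    """Build an API-key auth block sent in a custom header."""
    return {"type": "api_key", "key": os.environ.get(key_env, ""), "header": header}

auth = api_key_auth("TARGET_API_KEY")
print(auth["type"], auth["header"])  # api_key X-API-Key
```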
Secret Management
APILinker provides enterprise-grade secret management with support for multiple cloud secret storage providers. This feature enables you to securely store and retrieve API credentials without hardcoding them in configuration files.
Supported Providers
- HashiCorp Vault - Enterprise secret management with KV v1/v2 support
- AWS Secrets Manager - AWS native secret storage with automatic rotation
- Azure Key Vault - Azure native secret management with managed identity support
- Google Secret Manager - GCP native secret storage with workload identity
- Environment Variables - Fallback for development (not recommended for production)
Quick Start
- Install secret provider dependencies (optional):
# HashiCorp Vault
pip install hvac
# AWS Secrets Manager
pip install boto3
# Azure Key Vault
pip install azure-keyvault-secrets azure-identity
# Google Secret Manager
pip install google-cloud-secret-manager
- Configure secret management in your YAML:
# Secret Management Configuration
secrets:
  provider: vault  # or aws, azure, gcp, env
  vault:
    url: "http://localhost:8200"
    token: "hvs.CAESI..."
    mount_point: "secret"
    kv_version: 2
  rotation_strategy: "manual"
  enable_least_privilege: true

# Reference secrets using secret:// prefix
source:
  type: rest
  base_url: "https://api.source.com"
  auth:
    type: api_key
    key: "secret://apilinker/source-api-key"  # Vault path
    header: "X-API-Key"

target:
  type: rest
  base_url: "https://api.target.com"
  auth:
    type: oauth2
    client_id: "secret://target-oauth-client-id"
    client_secret: "secret://target-oauth-client-secret"
    token_url: "https://auth.target.com/token"
- Or use programmatically in Python:
from apilinker import ApiLinker

linker = ApiLinker(
    secret_manager_config={
        "provider": "aws",
        "aws": {
            "region_name": "us-east-1",
            # Uses IAM role automatically
        },
        "rotation_strategy": "auto",
    },
    source_config={
        "type": "rest",
        "base_url": "https://api.example.com",
        "auth": {
            "type": "api_key",
            "key": "secret://my-api-key",
            "header": "X-API-Key",
        },
    },
)
Provider-Specific Setup
HashiCorp Vault
secrets:
  provider: vault
  vault:
    url: "http://localhost:8200"
    token: "hvs.CAESI..."  # Or use AppRole
    # role_id: "your-role-id"
    # secret_id: "your-secret-id"
    mount_point: "secret"
    kv_version: 2
    cache_ttl_seconds: 300
AWS Secrets Manager
secrets:
  provider: aws
  aws:
    region_name: "us-east-1"
    # Uses IAM role/instance profile automatically
    # Or provide explicit credentials (not recommended for production)
IAM Policy (Least Privilege):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["secretsmanager:GetSecretValue"],
    "Resource": "arn:aws:secretsmanager:us-east-1:*:secret:apilinker/*"
  }]
}
Azure Key Vault
secrets:
  provider: azure
  azure:
    vault_url: "https://mykeyvault.vault.azure.net/"
    # Uses Managed Identity automatically
    # Or provide service principal credentials
RBAC Permission: Assign Key Vault Secrets User role to your managed identity.
Google Secret Manager
secrets:
  provider: gcp
  gcp:
    project_id: "my-gcp-project"
    # Uses Workload Identity/Application Default Credentials
IAM Permission: Grant roles/secretmanager.secretAccessor role.
Security Best Practices
- Use Managed Identities: Prefer workload identity, IAM roles, and managed identities over static credentials
- Enable Rotation: Use automatic rotation for production secrets
- Least Privilege: Grant only necessary permissions (read-only for most cases)
- Never Commit Secrets: Always use secret references (secret://), never hardcode credentials
- Use TLS: Always use HTTPS for production API connections
- Cache Wisely: Balance security (shorter cache) vs performance (longer cache)
For more details, see the Secret Management Examples and the provider-specific configurations above.
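The secret:// scheme is a naming convention: at config-load time each reference is handed to the configured provider, with environment variables as the development fallback. A hedged stdlib sketch of that resolution step (the provider_lookup callable and the env-var naming fallback are illustrative assumptions, not ApiLinker's exact behavior):

```python
import os

def resolve_secret(value: str, provider_lookup=None) -> str:
    """Resolve 'secret://path' via a provider callable; otherwise fall back to
    an env var named after the last path segment (dev convenience only)."""
    if not value.startswith("secret://"):
        return value  # plain literal, pass through unchanged
    path = value[len("secret://"):]
    if provider_lookup is not None:
        return provider_lookup(path)  # e.g. a Vault/AWS/Azure/GCP client call
    env_name = path.rsplit("/", 1)[-1].replace("-", "_").upper()
    return os.environ.get(env_name, "")

os.environ["SOURCE_API_KEY"] = "sk-demo"
print(resolve_secret("secret://apilinker/source-api-key"))  # sk-demo
```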
Field Mapping
Mappings define how data is transformed between source and target:
mapping:
  - source: source_endpoint_name
    target: target_endpoint_name
    fields:
      # Simple field mapping
      - source: id
        target: external_id
      # Nested field mapping
      - source: user.profile.name
        target: user_name
      # With transformation
      - source: created_at
        target: timestamp
        transform: iso_to_timestamp
      # Multiple transformations
      - source: description
        target: summary
        transform:
          - strip
          - lowercase
      # Conditional mapping
      - source: status
        target: active_status
        condition:
          field: status
          operator: eq  # eq, ne, exists, not_exists, gt, lt
          value: active
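Nested paths like user.profile.name are resolved by walking the source record one key at a time, and conditions compare the resolved value with the configured operator. A minimal stdlib sketch of both steps (this mirrors the semantics described above, not ApiLinker's internal implementation):

```python
from typing import Any

def get_path(record: dict, path: str, default: Any = None) -> Any:
    """Resolve a dotted path like 'user.profile.name' against a nested dict."""
    current: Any = record
    for key in path.split("."):
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current

def condition_met(record: dict, field: str, operator: str, value: Any = None) -> bool:
    """Evaluate the condition operators listed above against a record."""
    actual = get_path(record, field)
    if operator == "exists":
        return actual is not None
    if operator == "not_exists":
        return actual is None
    if operator == "eq":
        return actual == value
    if operator == "ne":
        return actual != value
    if operator == "gt":
        return actual is not None and actual > value
    if operator == "lt":
        return actual is not None and actual < value
    raise ValueError(f"unknown operator: {operator}")

record = {"user": {"profile": {"name": "Ada"}}, "status": "active"}
print(get_path(record, "user.profile.name"))                 # Ada
print(condition_met(record, "status", "eq", "active"))       # True
```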
Schema Validation and Strict Mode
Validate source responses and target requests against JSON Schemas, and optionally enable strict mode to fail early when mismatches occur.
source:
  endpoints:
    list_items:
      path: /items
      method: GET
      response_schema:
        type: object
        properties:
          data:
            type: array
            items:
              type: object
              properties:
                id: { type: string }
                name: { type: string }

target:
  endpoints:
    create_item:
      path: /items
      method: POST
      request_schema:
        type: object
        properties:
          external_id: { type: string }
          title: { type: string }
        required: [external_id, title]

validation:
  strict_mode: true  # Fail sync if target payloads do not satisfy the request schema
CLI to infer minimal schemas and a starter mapping from samples:
apilinker probe-schema --source src_sample.json --target tgt_sample.json
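Strict mode amounts to checking each outgoing payload against the request schema before the POST. A simplified stdlib sketch covering only required keys and primitive types (full JSON Schema validation, e.g. via the jsonschema package, handles far more; this is an illustration of the idea, not ApiLinker's validator):

```python
TYPE_MAP = {"string": str, "integer": int, "number": (int, float),
            "boolean": bool, "object": dict, "array": list}

def check_payload(payload: dict, schema: dict) -> list:
    """Return a list of human-readable problems; an empty list means the payload passes."""
    problems = []
    for key in schema.get("required", []):
        if key not in payload:
            problems.append(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        expected = TYPE_MAP.get(spec.get("type"))
        if key in payload and expected and not isinstance(payload[key], expected):
            problems.append(f"{key}: expected {spec['type']}, got {type(payload[key]).__name__}")
    return problems

schema = {"type": "object",
          "properties": {"external_id": {"type": "string"}, "title": {"type": "string"}},
          "required": ["external_id", "title"]}
print(check_payload({"external_id": "42"}, schema))  # ['missing required field: title']
```

In strict mode a non-empty problem list would abort the sync; otherwise the problems could simply be logged.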
Data Transformations
ApiLinker provides built-in transformers for common operations:
| Transformer | Description |
|---|---|
| iso_to_timestamp | Convert ISO date to Unix timestamp |
| timestamp_to_iso | Convert Unix timestamp to ISO date |
| lowercase | Convert string to lowercase |
| uppercase | Convert string to uppercase |
| strip | Remove whitespace from start/end |
| to_string | Convert value to string |
| to_int | Convert value to integer |
| to_float | Convert value to float |
| to_bool | Convert value to boolean |
| default_empty_string | Return empty string if null |
| default_zero | Return 0 if null |
| none_if_empty | Return null if empty string |
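When a field lists multiple transforms, they are applied left to right, each feeding its output to the next. A sketch of that chaining with plain functions (the registry contents here are illustrative stand-ins for the built-ins above):

```python
# Illustrative stand-ins for three of the built-in transformers
TRANSFORMERS = {
    "strip": lambda v: v.strip() if isinstance(v, str) else v,
    "lowercase": lambda v: v.lower() if isinstance(v, str) else v,
    "default_empty_string": lambda v: "" if v is None else v,
}

def apply_chain(value, names):
    """Apply each named transformer in order, left to right."""
    for name in names:
        value = TRANSFORMERS[name](value)
    return value

print(apply_chain("  Hello World  ", ["strip", "lowercase"]))  # hello world
```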
You can also create custom transformers:
import re

def phone_formatter(value):
    """Format phone numbers to E.164 format."""
    if not value:
        return None
    digits = re.sub(r'\D', '', value)
    if len(digits) == 10:
        return f"+1{digits}"
    return f"+{digits}"

# Register with ApiLinker
linker.mapper.register_transformer("phone_formatter", phone_formatter)
Comprehensive Research Connector Ecosystem
ApiLinker includes 8 specialized research connectors covering scientific literature, chemical data, researcher profiles, code repositories, and more:
Scientific Literature & Data
- NCBI (PubMed, GenBank) - Biomedical literature and genetic sequences
- arXiv - Academic preprints across all sciences
- CrossRef - Citation data and DOI resolution
- Semantic Scholar - AI-powered academic search with citation analysis
Chemical & Biological Data
- PubChem - Chemical compounds, bioassays, and drug discovery data
- ORCID - Researcher profiles and academic credentials
Code & Implementation Research
- GitHub - Code repositories, contribution analysis, and software research
- NASA - Earth science, climate data, and space research
Quick Start with Multiple Connectors
from apilinker import (
NCBIConnector, ArXivConnector, CrossRefConnector,
SemanticScholarConnector, PubChemConnector, ORCIDConnector,
GitHubConnector, NASAConnector
)
# Initialize research connectors
ncbi = NCBIConnector(email="researcher@university.edu")
arxiv = ArXivConnector()
semantic = SemanticScholarConnector(api_key="optional")
pubchem = PubChemConnector()
github = GitHubConnector(token="optional")
# Cross-platform drug discovery research
topic = "BRCA1 inhibitors"
# Literature search
pubmed_papers = ncbi.search_pubmed(topic, max_results=50)
ai_papers = semantic.search_papers(f"machine learning {topic}", max_results=30)
# Chemical compound analysis
compounds = pubchem.search_compounds("BRCA1 inhibitor")
# Implementation code
github_repos = github.search_repositories(f"{topic} drug discovery", language="Python")
print(f"PubMed papers: {len(pubmed_papers.get('esearchresult', {}).get('idlist', []))}")
print(f"AI/ML papers: {len(ai_papers.get('data', []))}")
print(f"GitHub repositories: {len(github_repos.get('items', []))}")
Interdisciplinary Research Workflows
from apilinker import ApiLinker
# Climate science + AI research
linker = ApiLinker()
# Combine NASA climate data with arXiv ML papers
nasa = NASAConnector(api_key="nasa_key")
arxiv = ArXivConnector()
# Get earth observation data
climate_data = nasa.get_earth_imagery(lat=40.7128, lon=-74.0060)
# Find AI methods for climate analysis
ml_climate_papers = arxiv.search_papers("machine learning climate", max_results=100)
# Researcher collaboration analysis
orcid = ORCIDConnector()
climate_researchers = orcid.search_by_research_area(["climate science", "machine learning"])
print(f"Climate data sources: {len(climate_data)}")
print(f"ML climate papers: {len(ml_climate_papers)}")
print(f"Researchers found: {len(climate_researchers.get('result', []))}")
Examples
Jupyter Notebook Tutorial
For an interactive tutorial with visualizations, check out the Jupyter notebook:
- ApiLinker Research Tutorial - Interactive notebook demonstrating research workflows, data visualization, and cross-platform research
- Try it online: Click the Binder badge above to run the notebook in your browser (no installation required!)
GitHub to GitLab Issue Migration
from apilinker import ApiLinker

# Configure ApiLinker
linker = ApiLinker(
    source_config={
        "type": "rest",
        "base_url": "https://api.github.com",
        "auth": {"type": "bearer", "token": github_token},
        "endpoints": {
            "list_issues": {
                "path": f"/repos/{owner}/{repo}/issues",
                "method": "GET",
                "params": {"state": "all"},
                "headers": {"Accept": "application/vnd.github.v3+json"}
            }
        }
    },
    target_config={
        "type": "rest",
        "base_url": "https://gitlab.com/api/v4",
        "auth": {"type": "bearer", "token": gitlab_token},
        "endpoints": {
            "create_issue": {
                "path": f"/projects/{project_id}/issues",
                "method": "POST"
            }
        }
    }
)

# Custom transformer for labels
linker.mapper.register_transformer(
    "github_labels_to_gitlab",
    lambda labels: [label["name"] for label in labels] if labels else []
)

# Add mapping
linker.add_mapping(
    source="list_issues",
    target="create_issue",
    fields=[
        {"source": "title", "target": "title"},
        {"source": "body", "target": "description"},
        {"source": "labels", "target": "labels", "transform": "github_labels_to_gitlab"},
        {"source": "state", "target": "state"}
    ]
)

# Run the migration
result = linker.sync()
print(f"Migrated {result.count} issues from GitHub to GitLab")
More Examples
See the examples directory for more use cases:
- Jupyter Notebook Tutorial - Interactive research workflow examples with visualizations
- Comprehensive Research Examples - All 8 research connectors in action
- Salesforce to HubSpot contact sync
- CSV file to REST API import
- Weather API data collection
- Custom plugin development
Observability & Monitoring
APILinker provides comprehensive observability through OpenTelemetry integration, enabling distributed tracing and Prometheus metrics export for production monitoring.
Features
- Distributed Tracing: Track sync operations across your entire data pipeline with correlation IDs
- Prometheus Metrics: Export counters and histograms for sync operations, API calls, and errors
- Graceful Degradation: Works without OpenTelemetry installed (optional dependency)
- Console Export: Debug mode for development and testing
- Production Ready: Industry-standard observability with minimal overhead
Quick Start
1. Install OpenTelemetry (Optional)
pip install opentelemetry-api opentelemetry-sdk
pip install opentelemetry-exporter-prometheus prometheus-client
2. Enable Observability
from apilinker import ApiLinker

linker = ApiLinker(
    source_config={"type": "rest", "base_url": "https://api.source.com"},
    target_config={"type": "rest", "base_url": "https://api.target.com"},
    observability_config={
        "enabled": True,
        "service_name": "my-data-pipeline",
        "enable_tracing": True,
        "enable_metrics": True,
        "export_to_prometheus": True,
        "prometheus_port": 9090
    }
)

# Sync operations are automatically traced and measured
result = linker.sync()
3. Access Metrics
# View Prometheus metrics
curl http://localhost:9090/metrics
Available Metrics
- apilinker.sync.count: Total sync operations (labeled by success/failure)
- apilinker.sync.duration: Sync operation duration histogram (milliseconds)
- apilinker.api_call.count: Total API calls (labeled by operation type)
- apilinker.api_call.duration: API call duration histogram (milliseconds)
- apilinker.error.count: Total errors (labeled by category and operation)
YAML Configuration
observability:
  enabled: true
  service_name: "apilinker-production"
  enable_tracing: true
  enable_metrics: true
  export_to_console: false  # Set true for debugging
  export_to_prometheus: true
  prometheus_host: "0.0.0.0"
  prometheus_port: 9090
Grafana Dashboard
- Add Prometheus as a data source in Grafana
- Query metrics: apilinker_sync_count, apilinker_sync_duration_bucket, etc.
- Create visualizations for:
- Sync success/failure rates
- Operation latencies (p50, p95, p99)
- Error rates by category
- API call volumes
Performance Impact
- Disabled: Zero overhead
- Without OpenTelemetry: Negligible (no-op operations)
- Full Observability: <1% CPU overhead for typical workloads
For detailed documentation, see examples/observability_example.py and OBSERVABILITY_IMPLEMENTATION.md.
Common Use Cases with Examples
1. Sync Data Between Two APIs
This example shows how to sync customer data from CRM to a marketing platform:
from apilinker import ApiLinker
import os

# Set environment variables securely before running
# os.environ["CRM_API_KEY"] = "your_crm_api_key"
# os.environ["MARKETING_API_KEY"] = "your_marketing_api_key"

# Initialize ApiLinker
linker = ApiLinker()

# Configure CRM source
linker.add_source(
    type="rest",
    base_url="https://api.crm-platform.com/v2",
    auth={
        "type": "api_key",
        "header": "X-API-Key",
        "key": "${CRM_API_KEY}"  # Uses environment variable
    },
    endpoints={
        "get_customers": {
            "path": "/customers",
            "method": "GET",
            "params": {"last_modified_after": "2023-01-01"}
        }
    }
)

# Configure marketing platform target
linker.add_target(
    type="rest",
    base_url="https://api.marketing-platform.com/v1",
    auth={
        "type": "api_key",
        "header": "Authorization",
        "key": "${MARKETING_API_KEY}"  # Uses environment variable
    },
    endpoints={
        "create_contact": {
            "path": "/contacts",
            "method": "POST"
        }
    }
)

# Register a custom transformer for phone formatting (before the mapping uses it)
def format_phone(phone):
    if not phone:
        return ""
    # Remove non-digits
    digits = ''.join(c for c in phone if c.isdigit())
    # Format as (XXX) XXX-XXXX for US numbers
    if len(digits) == 10:
        return f"({digits[0:3]}) {digits[3:6]}-{digits[6:10]}"
    return phone

linker.mapper.register_transformer("format_phone", format_phone)

# Define field mapping with transformations
linker.add_mapping(
    source="get_customers",
    target="create_contact",
    fields=[
        {"source": "id", "target": "external_id"},
        {"source": "first_name", "target": "firstName"},
        {"source": "last_name", "target": "lastName"},
        {"source": "email", "target": "emailAddress"},
        {"source": "phone", "target": "phoneNumber", "transform": "format_phone"},
        # Custom field creation with default value
        {"target": "source", "value": "CRM Import"}
    ]
)

# Execute the sync
result = linker.sync()
print(f"Synced {result.count} customers to marketing platform")
2. Scheduled Data Collection
This example collects weather data hourly and saves to a CSV file:
from apilinker import ApiLinker
import csv
import datetime
import time
import os

# Create a function to handle the collected data
def save_weather_data(data, city):
    timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    # Create CSV if it doesn't exist
    file_exists = os.path.isfile(f"{city}_weather.csv")
    with open(f"{city}_weather.csv", mode='a', newline='') as file:
        writer = csv.writer(file)
        # Write header if file is new
        if not file_exists:
            writer.writerow(["timestamp", "temperature", "humidity", "conditions"])
        # Write data
        writer.writerow([
            timestamp,
            data['main']['temp'] - 273.15,  # Convert K to C
            data['main']['humidity'],
            data['weather'][0]['description']
        ])
    print(f"Weather data saved for {city} at {timestamp}")

# Initialize ApiLinker
linker = ApiLinker()

# Configure weather API
linker.add_source(
    type="rest",
    base_url="https://api.openweathermap.org/data/2.5",
    endpoints={
        "get_london_weather": {
            "path": "/weather",
            "method": "GET",
            "params": {
                "q": "London,uk",
                "appid": "YOUR_API_KEY"  # Replace with your API key
            }
        },
        "get_nyc_weather": {
            "path": "/weather",
            "method": "GET",
            "params": {
                "q": "New York,us",
                "appid": "YOUR_API_KEY"  # Replace with your API key
            }
        }
    }
)

# Create a custom handler for the weather data
def collect_weather():
    london_data = linker.fetch("get_london_weather")
    nyc_data = linker.fetch("get_nyc_weather")
    save_weather_data(london_data, "London")
    save_weather_data(nyc_data, "NYC")

# Run once to test
collect_weather()

# Then schedule to run hourly
linker.add_schedule(interval_minutes=60, callback=collect_weather)
linker.start_scheduled_sync()

# Keep the script running
try:
    print("Weather data collection started. Press Ctrl+C to stop.")
    while True:
        time.sleep(60)
except KeyboardInterrupt:
    print("Weather data collection stopped.")
Extending ApiLinker
Creating Custom Plugins
ApiLinker can be extended through plugins. Here's how to create a custom transformer plugin:
from apilinker.core.plugins import TransformerPlugin

class SentimentAnalysisTransformer(TransformerPlugin):
    """A transformer plugin that analyzes text sentiment."""

    plugin_name = "sentiment_analysis"  # This name is used to reference the plugin
    version = "1.0.0"                   # Optional version information
    author = "Your Name"                # Optional author information

    def transform(self, value, **kwargs):
        # Simple sentiment analysis (example)
        if not value or not isinstance(value, str):
            return {"sentiment": "neutral", "score": 0.0}

        # Add your sentiment analysis logic here
        positive_words = ["good", "great", "excellent"]
        negative_words = ["bad", "poor", "terrible"]

        # Count positive and negative words
        text = value.lower()
        positive_count = sum(1 for word in positive_words if word in text)
        negative_count = sum(1 for word in negative_words if word in text)

        # Calculate sentiment score
        total = positive_count + negative_count
        score = 0.0 if total == 0 else (positive_count - negative_count) / total

        return {
            "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
            "score": score
        }
Using Your Custom Plugin
After creating your plugin, you need to register it before using:
from apilinker import ApiLinker

# Import your custom plugin class
from my_plugins import SentimentAnalysisTransformer

# Initialize ApiLinker
linker = ApiLinker()

# Register the plugin
linker.plugin_manager.register_plugin(SentimentAnalysisTransformer)

# Configure APIs and mappings...
linker.add_mapping(
    source="get_reviews",
    target="save_analysis",
    fields=[
        {"source": "user_id", "target": "user_id"},
        # Use your custom plugin to transform the review text
        {"source": "review_text", "target": "sentiment_data", "transform": "sentiment_analysis"}
    ]
)
Troubleshooting Guide
Installation Issues
- Package not found error
  ERROR: Could not find a version that satisfies the requirement apilinker
  - Make sure you're using Python 3.8 or newer
  - Check your internet connection
  - Try upgrading pip: pip install --upgrade pip
- Import errors
  ImportError: No module named 'apilinker'
  - Verify installation: pip list | grep apilinker
  - Check if you're using the correct Python environment
  - Try reinstalling: pip install --force-reinstall apilinker
Connection Issues
- API connection failures
  ConnectionError: Failed to establish connection to api.example.com
  - Check your internet connection
  - Verify the API base URL is correct
  - Make sure the API service is online
  - Check if your IP is allowed by the API provider
- Authentication errors
  AuthenticationError: Invalid credentials
  - Verify your API key or token is correct
  - Check if the token has expired
  - Ensure you're using the correct authentication method
Mapping Issues
- Field not found errors
  KeyError: 'Field not found in source data: user_profile'
  - Check the actual response data structure
  - Make sure you're referencing the correct field names
  - For nested fields, use dot notation (e.g., user.profile.name)
- Transformation errors
  ValueError: Invalid data for transformer 'iso_to_timestamp'
  - Check if the data matches the expected format
  - Make sure the transformer is properly registered
  - Add validation to your custom transformers
Documentation
Documentation is available in the /docs directory and at Read the Docs: https://apilinker.readthedocs.io/
Core Documentation
- Getting Started - A beginner-friendly introduction
- Installation Guide - Detailed installation instructions
- Configuration Guide - Configuration options and formats
- API Reference - Detailed API reference
Quick Resources
- Quick Reference - Essential commands and patterns
- FAQ - Frequently asked questions
- Troubleshooting Guide - Solutions to common problems
Guides and Examples
- Cookbook - Ready-to-use recipes for common tasks
- Examples - Example use cases and code
- Extending with Plugins - Creating and using plugins
- Security Considerations - Security best practices (no custom encryption or built-in rate limiting)
Technical Documentation
- Architecture - System architecture and data flow diagrams
- Comparison - How ApiLinker compares to other integration tools
Step-by-Step Tutorials
- API-to-API Sync Tutorial - Learn to sync data between APIs
- Custom Transformers Tutorial - Create data transformation functions
- More tutorials - Browse all available tutorials
Comprehensive API Reference
For developers who want to extend ApiLinker or understand its internals, we provide comprehensive API reference documentation that can be generated using Sphinx:
# Install Sphinx and required packages
pip install sphinx sphinx-rtd-theme myst-parser
# Generate HTML documentation
cd docs/sphinx_setup
sphinx-build -b html . _build/html
The generated documentation will be available in docs/sphinx_setup/_build/html/index.html
Development
Version Management
APILinker uses bump-my-version for automated version bumping. See VERSION_MANAGEMENT.md for details.
```powershell
# Quick version bump (Windows)
.\bump.ps1 patch   # 0.5.0 -> 0.5.1
.\bump.ps1 minor   # 0.5.0 -> 0.6.0
.\bump.ps1 major   # 0.5.0 -> 1.0.0
```

```bash
# Quick version bump (Linux/Mac)
./bump.sh patch
```
This automatically updates version in all 14 files, creates a git commit, and tags the release.
Running Tests
```bash
# Run all tests with coverage
pytest --cov=apilinker --cov-report=term

# Run specific test file
pytest tests/test_api_linker.py -v

# Run with coverage threshold
pytest --cov=apilinker --cov-fail-under=80
```
Code Quality
```bash
# Format code
black .

# Lint
flake8 apilinker tests

# Type checking
mypy apilinker
```
Community Support
- GitHub Issues - Report bugs or request features
- Stack Overflow - Ask questions using the apilinker tag
Security Considerations
When working with APIs that require authentication, follow these security best practices:
- Never hardcode credentials in your code or configuration files. Always use environment variables or secure credential stores.
- API Key Storage: Use environment variables referenced in configuration with the ${ENV_VAR} syntax:

```yaml
auth:
  type: api_key
  header: X-API-Key
  key: ${MY_API_KEY}
```

- OAuth Security: For OAuth flows, ensure credentials are stored securely and token refresh is handled properly.
- Credential Validation: ApiLinker performs validation checks on authentication configurations to prevent common security issues.
- HTTPS: Use HTTPS endpoints whenever possible to protect data in transit.
- Audit Logging: Enable detailed logging for security-relevant events with:

```yaml
logging:
  level: INFO
  security_audit: true
```
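To make the ${MY_API_KEY} environment-variable practice concrete, here is a small Python sketch that resolves the key at startup and fails fast when it is missing. build_auth_config is an illustrative helper, not part of ApiLinker's API; the returned dict simply mirrors the auth configuration shown above.

```python
import os

def build_auth_config() -> dict:
    """Resolve the API key from the environment at startup.

    Failing fast here surfaces a missing credential immediately,
    instead of producing a confusing 401 mid-sync.
    """
    api_key = os.environ.get("MY_API_KEY")
    if not api_key:
        raise RuntimeError(
            "MY_API_KEY is not set; refusing to start with a missing credential"
        )
    return {"type": "api_key", "header": "X-API-Key", "key": api_key}
```

The same fail-fast pattern applies to any credential your configuration references.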
Roadmap
See ROADMAP.md for a detailed view of planned features and version releases.
Upcoming Highlights:
- v0.5.0: OpenTelemetry integration, Secret Management (Vault, AWS, Azure, GCP)
- v0.6.0: Webhook connectors, Message Queue support (RabbitMQ, Redis, SQS, Kafka)
- v0.7.0: Multi-source aggregation, Enhanced incremental sync
- v0.8.0: GraphQL support, API versioning
- v0.9.0: Testing framework, Configuration templates
- v1.0.0: Production-ready milestone with performance optimizations
Contributing
Contributions are welcome! Please see our Contributing Guide for details.
- Use GitHub "New issue" to open a bug report or feature request (templates provided)
- Fork the repo and create a focused branch for changes
- Add tests and docs where applicable, then open a Pull Request
- Check ROADMAP.md for features we're planning to implement
Citation
If you use ApiLinker in your research, please cite:
```bibtex
@software{apilinker2025,
  author  = {Kartas, Kyriakos},
  title   = {ApiLinker: A Universal Bridge for REST API Integrations},
  url     = {https://github.com/kkartas/apilinker},
  version = {0.4.1},
  year    = {2025}
}
```
License
This project is licensed under the MIT License - see the LICENSE file for details.
File details
Details for the file apilinker-0.5.2.tar.gz.
File metadata
- Download URL: apilinker-0.5.2.tar.gz
- Upload date:
- Size: 179.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 74f869f0793b0186ca7e4f5d9970178c8d09d27885acd538d8a63dcd52a51965 |
| MD5 | 967a7541fffe6e4b071a0ade0909cce7 |
| BLAKE2b-256 | 80ed2cdfc5564a6003922380add76e04cae3c5ff6be375759aefdc976eee7bf7 |
Provenance
The following attestation bundles were made for apilinker-0.5.2.tar.gz:
Publisher: release.yml on kkartas/APILinker
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: apilinker-0.5.2.tar.gz
- Subject digest: 74f869f0793b0186ca7e4f5d9970178c8d09d27885acd538d8a63dcd52a51965
- Sigstore transparency entry: 709806061
- Sigstore integration time:
- Permalink: kkartas/APILinker@a925ed697904658fbd7805de5f773b6fe3c5f6d2
- Branch / Tag: refs/tags/v0.5.2
- Owner: https://github.com/kkartas
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a925ed697904658fbd7805de5f773b6fe3c5f6d2
- Trigger Event: push
File details
Details for the file apilinker-0.5.2-py3-none-any.whl.
File metadata
- Download URL: apilinker-0.5.2-py3-none-any.whl
- Upload date:
- Size: 120.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b31e1b1e4c54133ac44532d841c6c349ae00f95f1e109d6c7036327141da0e32 |
| MD5 | c3b0afcfd89f0320f06e4ed52e1f2b46 |
| BLAKE2b-256 | e9891feafc5200337d19a7ce6bd58c72a5962371adf25c3c9c52cf07061eedcc |
Provenance
The following attestation bundles were made for apilinker-0.5.2-py3-none-any.whl:
Publisher: release.yml on kkartas/APILinker
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: apilinker-0.5.2-py3-none-any.whl
- Subject digest: b31e1b1e4c54133ac44532d841c6c349ae00f95f1e109d6c7036327141da0e32
- Sigstore transparency entry: 709806062
- Sigstore integration time:
- Permalink: kkartas/APILinker@a925ed697904658fbd7805de5f773b6fe3c5f6d2
- Branch / Tag: refs/tags/v0.5.2
- Owner: https://github.com/kkartas
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a925ed697904658fbd7805de5f773b6fe3c5f6d2
- Trigger Event: push