AWS Glue Schema Registry for Python
Use the AWS Glue Schema Registry in Python projects.
This library is a partial port of the Java aws-glue-schema-registry library. It implements a subset of the Java library's features, but is fully compatible with it for the features it does implement.
Feature Support
Feature | Java Library | Python Library | Notes |
---|---|---|---|
Serialization and deserialization using schema registry | ✔️ | ✔️ | |
Avro message format | ✔️ | ✔️ | |
JSON Schema message format | ✔️ | ❌ | |
Kafka Streams support | ✔️ | N/A | Kafka Streams is Java-only |
Compression | ✔️ | ✔️ | |
Local schema cache | ✔️ | ✔️ | |
Schema auto-registration | ✔️ | ✔️ | |
Evolution checks | ✔️ | ✔️ | |
Migration from a third party Schema Registry | ✔️ | ✔️ | |
Flink support | ✔️ | ❌ | |
Kafka Connect support | ✔️ | N/A | Kafka Connect is Java-only |
Installation
Clone this repository and install it:

```sh
pip install -e .
```

This library includes opt-in extra dependencies that enable support for certain features. For example, to use the schema registry with kafka-python, install the `kafka-python` extra:

```sh
pip install -e .[kafka-python]
```
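To sanity-check the installation, import the top-level package (the module name is `aws_schema_registry`, as used in the examples below):

```sh
python -c "import aws_schema_registry"
```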
Usage
First use boto3 to create a low-level AWS Glue client:

```python
import boto3

# Pass your AWS credentials or profile information here
session = boto3.Session(
    aws_access_key_id=xxx,
    aws_secret_access_key=xxx,
    region_name='us-west-2'
)
glue_client = session.client('glue')
```

See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration for more information on configuring boto3.
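If you keep credentials in a named profile rather than passing keys inline, boto3's Session also accepts a profile name; a minimal sketch, assuming a profile called `my-profile` exists in your AWS config:

```python
import boto3

# Load credentials from a named profile in ~/.aws/credentials
session = boto3.Session(profile_name='my-profile', region_name='us-west-2')
glue_client = session.client('glue')
```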
Send Kafka messages with SchemaRegistrySerializer:
```python
from aws_schema_registry import DataAndSchema, SchemaRegistryClient
from aws_schema_registry.avro import AvroSchema

# In this example we will use kafka-python as our Kafka client,
# so we need to have the `kafka-python` extras installed and use
# the kafka adapter.
from aws_schema_registry.adapter.kafka import SchemaRegistrySerializer
from kafka import KafkaProducer

# Create the schema registry client, which is a façade around the boto3 glue client
client = SchemaRegistryClient(glue_client,
                              registry_name='my-registry')

# Create the serializer
serializer = SchemaRegistrySerializer(client)

# Create the producer
producer = KafkaProducer(value_serializer=serializer)

# Our producer needs a schema to send along with the data.
# In this example we're using Avro, so we'll load an .avsc file.
with open('user.avsc', 'r') as schema_file:
    schema = AvroSchema(schema_file.read())

# Send message data along with the schema. The value MUST be an
# instance of DataAndSchema when using the SchemaRegistrySerializer.
data = {
    'name': 'John Doe',
    'favorite_number': 6
}
producer.send('my-topic', value=DataAndSchema(data, schema))
```
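For reference, here is a hypothetical user.avsc that matches the example data above; your actual schema will be whatever your application defines:

```json
{
  "type": "record",
  "name": "User",
  "namespace": "example",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "favorite_number", "type": "int"}
  ]
}
```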
Read Kafka messages with SchemaRegistryDeserializer:
```python
from aws_schema_registry import DataAndSchema, SchemaRegistryClient

# In this example we will use kafka-python as our Kafka client,
# so we need to have the `kafka-python` extras installed and use
# the kafka adapter.
from aws_schema_registry.adapter.kafka import SchemaRegistryDeserializer
from kafka import KafkaConsumer

# Create the schema registry client, which is a façade around the boto3 glue client
client = SchemaRegistryClient(glue_client,
                              registry_name='my-registry')

# Create the deserializer
deserializer = SchemaRegistryDeserializer(client)

# Create the consumer
consumer = KafkaConsumer('my-topic', value_deserializer=deserializer)

# Now use the consumer normally
for message in consumer:
    # The deserializer produces DataAndSchema instances
    value: DataAndSchema = message.value
    value.data
    value.schema
```
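Putting it together, each consumed value carries both the deserialized record (`data`) and the schema it was written with (`schema`); a short sketch, assuming the example user schema from the serializer section:

```python
for message in consumer:
    value = message.value
    # Read individual fields from the deserialized record
    print(value.data['name'], value.data['favorite_number'])
    # The writer's schema travels with the data
    print(value.schema)
```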
Contributing
Clone this repository and install development dependencies:

```sh
pip install -e .[dev]
```

Run the linter and tests with tox before committing. After committing, check GitHub Actions to see the result of the automated checks.
Linting
Lint the code with:

```sh
flake8
```

Run the type checker with:

```sh
mypy
```
Tests
Tests go under the tests/ directory. All tests outside of tests/integration are unit tests with no external dependencies.

Tests under tests/integration are integration tests that interact with external resources and/or real AWS schema registries. They generally run slower and require some additional configuration.

Run just the unit tests with:

```sh
pytest --ignore tests/integration
```
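Conversely, to run only the integration tests once the environment variables below are configured:

```sh
pytest tests/integration
```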
All integration tests use the following environment variables:

- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN
- AWS_REGION
- AWS_PROFILE
- CLEANUP_REGISTRY: set to any value to prevent the test from destroying the registry created during the test, allowing you to inspect its contents.

If no AWS_ environment variables are set, boto3 will try to load credentials from your default AWS profile.
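For example, to run the integration tests against a named profile (substitute your own profile and region):

```sh
AWS_PROFILE=my-profile AWS_REGION=us-west-2 pytest tests/integration
```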
See individual integration test directories for additional requirements and setup instructions.
Tox
This project uses Tox to run tests across multiple Python versions.

Install Tox with:

```sh
pip install tox
```

and run it with:

```sh
tox
```

Note that Tox requires the tested Python versions to be installed. One convenient way to manage this is with pyenv; see the .python-versions file for the Python versions that need to be installed.
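To run the suite under a single interpreter rather than all of them, select one Tox environment with the standard `-e` flag; a sketch, assuming an environment named py310 is defined in the Tox config:

```sh
tox -e py310
```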