DynamoDB CSV utility
A utility that allows CSV import / export to DynamoDB on the command line
Give a ⭐️ if you like this tool!
Introduction
I wrote this command because I couldn't find a tool that satisfied my modest wish: an easy way to import CSV files into DynamoDB. It is written as a simple Python script, so it is easy to read and modify.
It works for me.
Getting started 🚀
Install
$ python -m venv venv
$ . venv/bin/activate
$ pip install dynamodb-csv
$ dynamodb-csv -h
usage: dynamodb-csv [-h] [-v] [-i] [-e] [--truncate] -t TABLE [-idx INDEX] [-f FILE] [-o OUTPUT] [--ignore]
Import CSV file into DynamoDB table utilities
optional arguments:
-h, --help show this help message and exit
-v, --version show version
-i, --imp mode import
-e, --exp mode export
--truncate mode truncate
-t TABLE, --table TABLE
DynamoDB table name
-idx INDEX, --index INDEX
DynamoDB index name
-f FILE, --file FILE UTF-8 CSV file path required import mode
-o OUTPUT, --output OUTPUT
output file path required export mode
--ignore ignore import error
Install for developer
$ python -m venv venv
$ . venv/bin/activate
$ python setup.py install
$ dynamodb-csv -h
Or
$ python -m venv venv
$ . venv/bin/activate
$ pip install -r requirements-dev.txt
$ export PYTHONPATH=`pwd`
$ python app/main.py -h
For Windows
> python -m venv venv
> venv\Scripts\activate
> pip install -r requirements-dev.txt
> set PYTHONPATH=%cd%
> python app/main.py -h
Or you can use devcontainer.
Use Docker image
$ docker run --rm -v ${PWD}/:/local danishi/dynamodb-csv:tagname -i -t my_table -f sample.csv
For Windows
> docker run --rm -v %cd%/:/local danishi/dynamodb-csv:tagname -i -t my_table -f sample.csv
Create your config.ini file in the current directory
[AWS]
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
REGION=your_dynamodb_table_region
# Option
#ENDPOINT_URL=http://dynamodb-local:8000
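Since this is a standard INI file, it can be read with Python's built-in configparser. A minimal sketch, assuming the section and key names shown above (the optional ENDPOINT_URL is typically used to point at DynamoDB Local):

```python
import configparser

# Read an [AWS] section like the example config.ini above.
config = configparser.ConfigParser()
config.read_string("""
[AWS]
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
REGION=your_dynamodb_table_region
""")

aws = config["AWS"]
print(aws["REGION"])  # your_dynamodb_table_region

# ENDPOINT_URL is optional; fall back to None when absent.
endpoint = aws.get("ENDPOINT_URL", fallback=None)
print(endpoint)  # None
```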
Create your CSV and CSV spec file
Prepare a UTF-8 CSV file in the format you want to import into your DynamoDB table, plus a spec file that defines that format.
For example, see the following files.
StringPK,NumberSK,DecimalValue,BooleanValue,NullValue,JsonValue,StringListValues,DecimalListValues
foo,1,1.23,TRUE,,"[{""string"" : ""value""},{""number"" : 100}]",foo bar baz,10 10.1 20
foo,2,0.001,,,"[{""boolean"" : true}]",リンゴ バナナ スイカ,10 10.1 20
foo,3,1,,,"[{""boolean"" : false}]",,
# sample.csv data format specification
# String : S
# Integer : I
# Decimal : D
# Boolean : B (blank false)
# Json : J
# StringList : SL
# StringSet : SS
# DecimalList : DL
# DecimalSet : DS
[CSV_SPEC]
StringPK=S
NumberSK=I
DecimalValue=D
BooleanValue=B
NullValue=S
JsonValue=J
StringListValues=SL
StringSetValues=SS
DecimalListValues=DL
DecimalSetValues=DS
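To illustrate how a spec like this maps CSV strings onto DynamoDB-ready Python values, here is a hypothetical converter (not the tool's actual code; the function name and spec dict are assumptions for the sketch). Note that DynamoDB requires Decimal rather than float for numbers:

```python
import json
from decimal import Decimal

def row_to_item(row, spec):
    """Hypothetical sketch: convert one CSV row (dict of strings)
    into a DynamoDB-ready item using a [CSV_SPEC]-style mapping."""
    item = {}
    for name, typ in spec.items():
        value = row.get(name, "")
        if typ == "S":
            item[name] = value
        elif typ == "I":
            item[name] = int(value)
        elif typ == "D":
            item[name] = Decimal(value)  # DynamoDB numbers are Decimals
        elif typ == "B":
            item[name] = value.upper() == "TRUE"  # blank means false
        elif typ == "J":
            item[name] = json.loads(value) if value else None
        elif typ == "SL":
            item[name] = value.split()  # space-separated string list
        elif typ == "SS":
            item[name] = set(value.split())
        elif typ == "DL":
            item[name] = [Decimal(v) for v in value.split()]
        elif typ == "DS":
            item[name] = {Decimal(v) for v in value.split()}
    return item

spec = {"StringPK": "S", "NumberSK": "I", "DecimalValue": "D",
        "BooleanValue": "B", "StringListValues": "SL"}
row = {"StringPK": "foo", "NumberSK": "1", "DecimalValue": "1.23",
       "BooleanValue": "TRUE", "StringListValues": "foo bar baz"}
print(row_to_item(row, spec))
```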
Create DynamoDB table
You need to have created a DynamoDB table that matches your CSV spec beforehand.
$ aws dynamodb create-table --cli-input-json file://my_table.json --region ap-northeast-1
$ aws dynamodb describe-table --table-name my_table
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "NumberSK",
"AttributeType": "N"
},
{
"AttributeName": "StringPK",
"AttributeType": "S"
}
],
"TableName": "my_table",
"KeySchema": [
{
"AttributeName": "StringPK",
"KeyType": "HASH"
},
{
"AttributeName": "NumberSK",
"KeyType": "RANGE"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": "2022-06-26T21:19:21.767000+09:00",
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"TableSizeBytes": 0,
"ItemCount": 0,
"TableArn": "arn:aws:dynamodb:ap-northeast-1:XXXXXXXXXXX:table/my_table",
"TableId": "XXXXXXXX-925b-4cb1-8e3a-604158118c3f",
"GlobalSecondaryIndexes": [
{
"IndexName": "NumberSK-index",
"KeySchema": [
{
"AttributeName": "NumberSK",
"KeyType": "HASH"
}
],
"Projection": {
"ProjectionType": "INCLUDE",
"NonKeyAttributes": [
"DecimalValue",
"JsonValue"
]
},
"IndexStatus": "ACTIVE",
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"IndexSizeBytes": 0,
"ItemCount": 0,
"IndexArn": "arn:aws:dynamodb:ap-northeast-1:XXXXXXXXXXX:table/my_table/index/NumberSK-index"
}
]
}
}
CSV import into Table
This command requires a CSV spec file in the same directory.
$ dynamodb-csv -i -t my_table -f sample.csv
please wait my_table importing sample.csv
300it [00:00, 19983.03it/s]
100%|██████████████████████████████████████████████████| 300/300 [00:07<00:00, 40.97it/s]
my_table csv imported 300 items
Imports are processed at high speed using batch writes.
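DynamoDB's BatchWriteItem API accepts at most 25 put requests per call, so any batch importer has to split its rows into chunks of 25. An illustrative helper (not the tool's actual code):

```python
def chunk(items, size=25):
    """Yield successive chunks of up to `size` items,
    matching DynamoDB's 25-item BatchWriteItem limit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

rows = list(range(300))          # stand-in for 300 CSV rows
batches = list(chunk(rows))
print(len(batches))              # 300 items -> 12 batches
print(len(batches[-1]))          # last batch holds 25 items
```

With boto3, `Table.batch_writer()` handles this chunking (and retries of unprocessed items) automatically.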
Ignore option
If a record fails to import (for example, due to a key schema mismatch), this option skips that CSV record instead of aborting.
$ dynamodb-csv -i -t my_table -f sample.csv --ignore
please wait my_table importing sample.csv
300it [00:00, 19983.03it/s]
100%|██████████████████████████████████████████████████| 300/300 [00:07<00:00, 40.97it/s]
my_table csv imported 299 items and 1 error items
No batch write is done when this option is used.
Import options
By default, an empty CSV value is imported as an empty value.
There are options to convert empty values to Null instead, or to not set the attribute at all.
[IMPORT_OPTION]
ConvertBlankToNullAttrs=NullValue,JsonValue
ConvertBlankToDropAttrs=DecimalValue
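A hypothetical sketch of how such options could be applied to an item before it is written (the function name is an assumption for illustration): blanks in the listed attributes either become None, which DynamoDB stores as Null, or are dropped entirely.

```python
def apply_blank_options(item, null_attrs, drop_attrs):
    """Hypothetical: convert blank values to None (DynamoDB Null)
    for null_attrs, drop blank values entirely for drop_attrs."""
    out = {}
    for name, value in item.items():
        if value == "":
            if name in null_attrs:
                out[name] = None   # stored as DynamoDB Null
                continue
            if name in drop_attrs:
                continue           # attribute not set at all
        out[name] = value
    return out

item = {"StringPK": "foo", "NullValue": "", "JsonValue": "", "DecimalValue": ""}
print(apply_blank_options(item, {"NullValue", "JsonValue"}, {"DecimalValue"}))
# {'StringPK': 'foo', 'NullValue': None, 'JsonValue': None}
```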
Export table to CSV
You may also want to load the same data into multiple tables, so data can be exported as well.
As with import, you need a CSV spec file.
$ dynamodb-csv -e -t my_table -o sample_exp.csv
please wait my_table exporting sample_exp.csv
100%|█████████████████████████████████████████████████| 300/300 [00:00<00:00, 16666.77it/s]
my_table csv exported 300 items
Use index
$ dynamodb-csv -e -t my_table -idx NumberSK-index -o sample_gsi_exp.csv
Use Query
$ dynamodb-csv -e -t my_table -idx NumberSK-index -o sample_query_exp.csv
# sample_query_exp.csv data format specification
# Integer : I
# String : S
# Decimal : D
# Json : J
[QUERY_OPTION]
PKAttribute=NumberSK
PKAttributeValue=1
PKAttributeType=I
[CSV_SPEC]
NumberSK=I
StringPK=S
DecimalValue=D
JsonValue=J
Query options
PKAttribute
: Partition key attribute name
PKAttributeValue
: Partition key attribute query value
PKAttributeType
: Partition key attribute data type
SKAttribute
: Sort key attribute name
SKAttributeValues
: Sort key attribute query value or values (ex. foo or foo,bar)
SKAttributeType
: Sort key attribute data type
SKAttributeExpression
: Sort key attribute query expression (ex. begins_with, between, eq, gt, gte, lt, lte)
$ dynamodb-csv -e -t my_table -o sample_query_exp2.csv
[QUERY_OPTION]
PKAttribute=StringPK
PKAttributeValue=bar
PKAttributeType=S
SKAttribute=NumberSK
SKAttributeValues=50,100
SKAttributeType=I
SKAttributeExpression=between
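Conceptually, a [QUERY_OPTION] block like this maps onto a DynamoDB KeyConditionExpression. A hypothetical sketch of that translation for two of the operators (function name and dict layout are assumptions, not the tool's code):

```python
def build_key_condition(opts):
    """Hypothetical: build a DynamoDB KeyConditionExpression string
    and its expression attribute values from [QUERY_OPTION]-style keys."""
    expr = f"{opts['PKAttribute']} = :pk"
    values = {":pk": opts["PKAttributeValue"]}
    if "SKAttribute" in opts:
        sk = opts["SKAttribute"]
        vals = opts["SKAttributeValues"].split(",")
        op = opts["SKAttributeExpression"]
        if op == "between":
            expr += f" AND {sk} BETWEEN :sk1 AND :sk2"
            values[":sk1"], values[":sk2"] = vals
        elif op == "begins_with":
            expr += f" AND begins_with({sk}, :sk)"
            values[":sk"] = vals[0]
    return expr, values

opts = {"PKAttribute": "StringPK", "PKAttributeValue": "bar",
        "SKAttribute": "NumberSK", "SKAttributeValues": "50,100",
        "SKAttributeExpression": "between"}
print(build_key_condition(opts))
# ('StringPK = :pk AND NumberSK BETWEEN :sk1 AND :sk2',
#  {':pk': 'bar', ':sk1': '50', ':sk2': '100'})
```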
Table truncate
You may also want to erase unneeded data while experimenting with imports, so a truncate command is provided.
$ dynamodb-csv --truncate -t my_table
my_table scan 300 items
please wait my_table truncating
100%|██████████████████████████████████████████████████| 300/300 [00:07<00:00, 40.95it/s]
my_table truncated
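DynamoDB has no native truncate API, so truncating means scanning every item's key attributes and issuing batched delete requests (again up to 25 per BatchWriteItem call). An illustrative sketch of building those requests (not the tool's actual code):

```python
def delete_requests(scanned_items, key_names):
    """Illustrative: turn scanned items into BatchWriteItem
    DeleteRequest entries, keeping only the table's key attributes."""
    for item in scanned_items:
        yield {"DeleteRequest": {"Key": {k: item[k] for k in key_names}}}

items = [{"StringPK": "foo", "NumberSK": 1, "DecimalValue": 1.23},
         {"StringPK": "foo", "NumberSK": 2}]
reqs = list(delete_requests(items, ["StringPK", "NumberSK"]))
print(reqs[0])
# {'DeleteRequest': {'Key': {'StringPK': 'foo', 'NumberSK': 1}}}
```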
License
See LICENSE
Special Thanks
Code contributors
Appendix
User guide