Python interface to the Salesforce.com Bulk API.
Project description
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
![travis-badge](https://travis-ci.org/heroku/salesforce-bulk.svg?branch=master)
# Salesforce Bulk
Python client library for accessing the asynchronous Salesforce.com Bulk API.
## Installation
```pip install salesforce-bulk```
## Authentication
To access the Bulk API you need to authenticate a user into Salesforce. The easiest
way to do this is to supply `username`, `password`, and `security_token`. This library
uses the `simple-salesforce` package to handle password-based authentication.
```
from salesforce_bulk import SalesforceBulk
bulk = SalesforceBulk(username=username, password=password, security_token=security_token)
...
```
Alternatively, if you already have a session ID and an `instance_url`, you can use
those directly:
```
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse
from salesforce_bulk import SalesforceBulk
bulk = SalesforceBulk(sessionId=sessionId, host=urlparse(instance_url).hostname)
...
```
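For instance, if you already authenticate elsewhere with `simple-salesforce`, you can reuse that session. This is a minimal sketch, assuming `simple-salesforce` exposes the authenticated session via its `session_id` and `sf_instance` attributes:
```
from simple_salesforce import Salesforce
from salesforce_bulk import SalesforceBulk

# Authenticate once with simple-salesforce, then hand the session to SalesforceBulk.
sf = Salesforce(username=username, password=password, security_token=security_token)
bulk = SalesforceBulk(sessionId=sf.session_id, host=sf.sf_instance)
```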
## Operations
The basic sequence for driving the Bulk API is (see the sketch after this list):
1. Create a new job
2. Add one or more batches to the job
3. Close the job
4. Wait for each batch to finish
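Putting those steps together, here is a minimal sketch, assuming an authenticated `SalesforceBulk` instance named `bulk`; the object name and query are placeholders:
```
job = bulk.create_query_job("Contact", contentType='JSON')  # 1. create a new job
batch = bulk.query(job, "select Id from Contact")           # 2. add a batch to the job
bulk.close_job(job)                                         # 3. close the job
bulk.wait_for_batch(job, batch)                             # 4. wait for the batch to finish
```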
## Bulk Query
`bulk.create_query_job(object_name, contentType='JSON')`
Using API v39.0 or higher, you can also use the queryAll operation:
`bulk.create_queryall_job(object_name, contentType='JSON')`
Example
```
from salesforce_bulk.util import IteratorBytesIO
from time import sleep
import json

job = bulk.create_query_job("Contact", contentType='JSON')
batch = bulk.query(job, "select Id,LastName from Contact")
bulk.close_job(job)
while not bulk.is_batch_done(batch):
    sleep(10)

for result in bulk.get_all_results_for_query_batch(batch):
    result = json.load(IteratorBytesIO(result))
    for row in result:
        print(row)  # dictionary rows
```
Same example but for CSV:
```
import unicodecsv
from time import sleep

job = bulk.create_query_job("Contact", contentType='CSV')
batch = bulk.query(job, "select Id,LastName from Contact")
bulk.close_job(job)
while not bulk.is_batch_done(batch):
    sleep(10)

for result in bulk.get_all_results_for_query_batch(batch):
    reader = unicodecsv.DictReader(result, encoding='utf-8')
    for row in reader:
        print(row)  # dictionary rows
```
Note that while CSV is the default for historical reasons, JSON should be preferred, since CSV
has some drawbacks, including its handling of NULL versus empty string.
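To illustrate the ambiguity, here is a sketch (with made-up Ids) of how the same two Contacts look after parsing JSON query results versus CSV query results:
```
# Illustrative only (made-up Ids).
json_rows = [
    {"Id": "003000000000001", "LastName": None},  # NULL in Salesforce
    {"Id": "003000000000002", "LastName": ""},    # empty string
]
csv_rows = [
    {"Id": "003000000000001", "LastName": ""},  # NULL collapses to an empty cell
    {"Id": "003000000000002", "LastName": ""},  # indistinguishable from NULL
]
```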
## Bulk Insert, Update, Delete
All Bulk upload operations work the same way. You set the operation when you create the
job, then submit one or more documents that specify records with the columns to
insert/update/delete. When deleting, you should submit only the Id for each record.
For efficiency, use the `post_batch` method to post each batch of
data. (Note that a batch can have a maximum of 10,000 records and be 1GB in size.)
You pass a generator or iterator into this function and it will stream data via
POST to Salesforce. For help sending CSV-formatted data, you can use the
`salesforce_bulk.CsvDictsAdapter` class. It takes an iterator yielding dictionaries
and returns an iterator that produces CSV data.
Full example:
```
from salesforce_bulk import CsvDictsAdapter

job = bulk.create_insert_job("Account", contentType='CSV')
accounts = [dict(Name="Account%d" % idx) for idx in range(5)]
csv_iter = CsvDictsAdapter(iter(accounts))
batch = bulk.post_batch(job, csv_iter)
bulk.wait_for_batch(job, batch)
bulk.close_job(job)
print("Done. Accounts uploaded.")
```
### Concurrency mode
When creating the job, pass `concurrency='Serial'` or `concurrency='Parallel'` to set the
concurrency mode for the job.
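For example, a minimal sketch mirroring the insert job created above:
```
job = bulk.create_insert_job("Account", contentType='CSV', concurrency='Serial')
```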
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Natural Language :: English
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Download files
- Source Distribution: salesforce-bulk-2.0.0.dev6.tar.gz
- Built Distribution: salesforce_bulk-2.0.0.dev6-py2.py3-none-any.whl
File details
Details for the file salesforce-bulk-2.0.0.dev6.tar.gz.
File metadata
- Download URL: salesforce-bulk-2.0.0.dev6.tar.gz
- Upload date:
- Size: 9.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 21c8b95dbf5693ce66655b19e517bd2be4da3a62bf325f65ddaec3a80848cdac
MD5 | 9d56117815f082563b33710fcfa44af0
BLAKE2b-256 | 2266e8959bec27b458daa23158346a7423f92b63d42caa4251fb3a5d3355139e
File details
Details for the file salesforce_bulk-2.0.0.dev6-py2.py3-none-any.whl.
File metadata
- Download URL: salesforce_bulk-2.0.0.dev6-py2.py3-none-any.whl
- Upload date:
- Size: 12.5 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7f3dce5f48dfdecca0bc4184c062ccfb164046aba5f0924f5793369067830c78
MD5 | e5f43f1ea3b02b70f7dd76f4dc9b90cd
BLAKE2b-256 | d5475ab157214f8742cbb6f23431c1ddf3cb6fe95f3e54c796f16d011587e834