Python interface to the Salesforce.com Bulk API.

Project description

Salesforce Bulk

Python client library for accessing the asynchronous Salesforce.com Bulk API.

Installation

pip install salesforce-bulk-2-7

Authentication

To access the Bulk API you need to authenticate a user into Salesforce. The easiest way is to supply username, password, and security_token. This library will use the simple-salesforce package to handle password-based authentication.

from salesforce_bulk import SalesforceBulk

bulk = SalesforceBulk(username=username, password=password, security_token=security_token)
...

Alternatively, if you already have access to a session ID and instance_url, you can use those directly:

from urlparse import urlparse
from salesforce_bulk import SalesforceBulk

bulk = SalesforceBulk(sessionId=sessionId, host=urlparse(instance_url).hostname)
...

Operations

The basic sequence for driving the Bulk API is:

  1. Create a new job

  2. Add one or more batches to the job

  3. Close the job

  4. Wait for each batch to finish

Bulk Query

bulk.create_query_job(object_name, contentType='JSON')

Using API v39.0 or higher, you can also use the queryAll operation, which includes deleted and archived records:

bulk.create_queryall_job(object_name, contentType='JSON')

Example

import json
from time import sleep
from salesforce_bulk.util import IteratorBytesIO

job = bulk.create_query_job("Contact", contentType='JSON')
batch = bulk.query(job, "select Id,LastName from Contact")
bulk.close_job(job)
while not bulk.is_batch_done(batch):
    sleep(10)

for result in bulk.get_all_results_for_query_batch(batch):
    result = json.load(IteratorBytesIO(result))
    for row in result:
        print(row)  # dictionary rows

Same example but for CSV:

import unicodecsv
from time import sleep

job = bulk.create_query_job("Contact", contentType='CSV')
batch = bulk.query(job, "select Id,LastName from Contact")
bulk.close_job(job)
while not bulk.is_batch_done(batch):
    sleep(10)

for result in bulk.get_all_results_for_query_batch(batch):
    reader = unicodecsv.DictReader(result, encoding='utf-8')
    for row in reader:
        print(row) # dictionary rows

Note that while CSV is the default for historical reasons, JSON should be preferred, since CSV has some drawbacks, including its handling of NULL versus empty string.
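
As a small illustration of that drawback (hypothetical row data), a null field is explicit in a parsed JSON result but collapses to an empty string in a CSV result:

import json

# JSON results preserve nulls:
row = json.loads('{"Id": "001xx0000000001", "Email": null}')
print(row["Email"] is None)  # True -- unambiguously null

# The equivalent CSV result row is 'Id,Email\n001xx0000000001,',
# which a CSV reader parses as {'Id': '001xx0000000001', 'Email': ''}:
# a NULL and a genuinely empty string become indistinguishable.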

PK Chunk Header

If you are querying a large number of records, you probably want to turn on PK chunking:

bulk.create_query_job(object_name, contentType='CSV', pk_chunking=True)

That will use the default setting for chunk size. You can use a different chunk size by providing a number of records per chunk:

bulk.create_query_job(object_name, contentType='CSV', pk_chunking=100000)

Additionally, if you want to do something more sophisticated, you can provide the header value yourself:

bulk.create_query_job(object_name, contentType='CSV', pk_chunking='chunkSize=50000; startRow=00130000000xEftMGH')

Finally, if you want to set an HTTP header yourself, you can pass a dictionary of custom header values that will be added to the create-job Bulk API call:

bulk.create_query_job(object_name, contentType='CSV', pk_chunking='chunkSize=50000; startRow=00130000000xEftMGH', extra_headers={'Sforce-Disable-Batch-Retry':'TRUE'})
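
When PK chunking is enabled, Salesforce retires the batch you submitted (its state becomes NotProcessed) and creates one batch per chunk, so you must wait on every batch in the job rather than just the one returned by query. A minimal sketch, assuming a get_batch_list(job) helper that returns one dict per batch with 'id' and 'state' keys; check your installed version for the exact name and shape:

import json
from time import sleep
from salesforce_bulk.util import IteratorBytesIO

job = bulk.create_query_job("Contact", contentType='JSON', pk_chunking=True)
bulk.query(job, "select Id,LastName from Contact")

# Poll the full batch list until every chunk batch reaches a final state.
while True:
    sleep(10)
    batches = [b for b in bulk.get_batch_list(job)
               if b['state'] != 'NotProcessed']  # skip the retired original batch
    if batches and all(b['state'] in ('Completed', 'Failed') for b in batches):
        break

bulk.close_job(job)

for b in batches:
    if b['state'] != 'Completed':
        continue  # a real implementation should report failed chunks
    for result in bulk.get_all_results_for_query_batch(b['id'], job):
        for row in json.load(IteratorBytesIO(result)):
            print(row)  # dictionary rows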

Bulk Insert, Update, Delete

All bulk upload operations work the same way. You set the operation when you create the job, then submit one or more documents that specify records with columns to insert/update/delete. When deleting, you should submit only the Id for each record.

For efficiency you should use the post_batch method to post each batch of data. (Note that a batch can have a maximum of 10,000 records and be at most 1 GB in size.) You pass a generator or iterator into this function and it will stream data via POST to Salesforce. For help sending CSV-formatted data, you can use the salesforce_bulk.CsvDictsAdapter class. It takes an iterator returning dictionaries and returns an iterator producing CSV data.

Full example:

from salesforce_bulk import CsvDictsAdapter

job = bulk.create_insert_job("Account", contentType='CSV')
accounts = [dict(Name="Account%d" % idx) for idx in range(5)]
csv_iter = CsvDictsAdapter(iter(accounts))
batch = bulk.post_batch(job, csv_iter)
bulk.wait_for_batch(job, batch)
bulk.close_job(job)
print("Done. Accounts uploaded.")

Concurrency mode

When creating the job, pass concurrency='Serial' or concurrency='Parallel' to set the concurrency mode for the job.
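
For example, to force the batches of an insert job to be processed one at a time:

job = bulk.create_insert_job("Account", contentType='CSV', concurrency='Serial')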

Download files

Download the file for your platform.

Source Distribution

salesforce-bulk-2-7-2.2.8.tar.gz (12.0 kB)


Built Distribution

salesforce_bulk_2_7-2.2.8-py2.py3-none-any.whl (10.6 kB)


File details

Details for the file salesforce-bulk-2-7-2.2.8.tar.gz.

File metadata

  • Download URL: salesforce-bulk-2-7-2.2.8.tar.gz
  • Size: 12.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.6.1 requests/2.25.1 setuptools/44.0.0.post20200106 requests-toolbelt/0.9.1 tqdm/4.55.1 CPython/2.7.18

File hashes

Hashes for salesforce-bulk-2-7-2.2.8.tar.gz
  • SHA256: beb0f313e21c69bb993f2b7d7137b8a83e3fb8de55cf640a688726f2752727c0
  • MD5: 9c8a54a4436497aed530557bb7e8a9e5
  • BLAKE2b-256: 35cdcd00a0ba6e2d5e5ef7f0b888456019464d11e8aa1821dceccc2038fb0c8f


File details

Details for the file salesforce_bulk_2_7-2.2.8-py2.py3-none-any.whl.

File metadata

  • Download URL: salesforce_bulk_2_7-2.2.8-py2.py3-none-any.whl
  • Size: 10.6 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.15.0 pkginfo/1.6.1 requests/2.25.1 setuptools/44.0.0.post20200106 requests-toolbelt/0.9.1 tqdm/4.55.1 CPython/2.7.18

File hashes

Hashes for salesforce_bulk_2_7-2.2.8-py2.py3-none-any.whl
  • SHA256: 52a2968db68c19863e7c473fe619a43aa5861324443b56d1ae672ef2ce2cc09e
  • MD5: a018ead7b85d332ed530b9e1a3109191
  • BLAKE2b-256: 3ba83954fe4489be18a508a06c07c1f937b30564904b7e3918ddd4de10f35e4f

