Deep Learning platform and Training Data via API.
Quick start

Install:

```shell
pip install diffgram
```

On Linux:

```shell
pip3 install diffgram
```

- Get credentials from Diffgram.com
- Download sample files from GitHub
- Configure credentials

Docs
Example

```python
from diffgram import Project

project = Project(project_string_id = "replace_with_project_string",
                  client_id = "replace_with_client_id",
                  client_secret = "replace_with_client_secret")

file = project.file.from_local(path)
```
Beta
Note: the API/SDK is in beta and is undergoing rapid improvement. There may be breaking changes. Please see the API docs for the latest canonical reference, and be sure to upgrade to the latest version, i.e. `pip install diffgram --upgrade`. We will attempt to keep the SDK up to date with the API.

Help articles are available on Diffgram.com. See below for some examples.
Requires Python >= 3.5.

The default install through pip will install the dependencies for local prediction (TensorFlow, OpenCV) as listed in requirements.txt.

The only dependency needed for the majority of functions is requests.

If you are looking for a minimal-size install and already have requests, use the `--no-deps` flag, i.e. `pip install diffgram --no-deps`.
Overall flow
The primary flow of using Diffgram is a cycle of importing data, training models, and updating those models, primarily by changing the data. Using the trained models and collecting feedback to channel back into Diffgram is handled in your system. More on this here.
Tutorials and walkthroughs

- Red Pepper Chef - from new training data to deployed system in a few lines of code
- How to validate your model
- Fast Annotation Net
Code samples
The project object
```python
from diffgram import Project

project = Project(project_string_id = "replace_with_project_string",
                  client_id = "replace_with_client_id",
                  client_secret = "replace_with_client_secret")
```
The project represents the primary starting point. The following examples assume you have a project defined like this.
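Rather than hard-coding the client secret, you may prefer to load credentials from the environment. A minimal sketch; the environment-variable names here are our own convention, not something the SDK mandates:

```python
import os

def credentials_from_env():
    """Load Diffgram credentials from environment variables.

    The variable names are a local convention for this example,
    not part of the diffgram SDK.
    """
    return {
        'project_string_id': os.environ['DIFFGRAM_PROJECT_STRING_ID'],
        'client_id': os.environ['DIFFGRAM_CLIENT_ID'],
        'client_secret': os.environ['DIFFGRAM_CLIENT_SECRET'],
    }

# Hypothetical usage, assuming Project is imported as above:
# project = Project(**credentials_from_env())
```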
Import data
Importing a single local file:

```python
file = project.file.from_local(path)
```
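A bulk import can be built on top of `from_local` by walking a directory. A minimal sketch, assuming `project` is defined as above; the `image_paths` helper is our own, not part of the SDK:

```python
from pathlib import Path

def image_paths(directory, extensions=('.jpg', '.jpeg', '.png')):
    """Collect image file paths in a directory (non-recursive)."""
    return sorted(p for p in Path(directory).iterdir()
                  if p.suffix.lower() in extensions)

# Hypothetical bulk import:
# for path in image_paths("./images"):
#     file = project.file.from_local(str(path))
```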
Importing from a URL (i.e. a cloud provider):

```python
result = project.file.from_url(url)
```

[See our help article for signed URLs](https://intercom.help/diffgram/getting-started/uploading-media)
Importing existing instances
```python
# instance_alpha is defined here so the snippet is self-contained;
# the original example only showed instance_bravo
instance_alpha = {
    'type': 'box',
    'name': 'cat',
    'x_max': 64,
    'x_min': 1,
    'y_min': 1,
    'y_max': 64
}

instance_bravo = {
    'type': 'box',
    'name': 'cat',
    'x_max': 128,
    'x_min': 1,
    'y_min': 1,
    'y_max': 128
}

# Combine into an image packet
image_packet = {
    'instance_list': [instance_alpha, instance_bravo],
    'media': {
        'url': "https://www.readersdigest.ca/wp-content/uploads/sites/14/2011/01/4-ways-cheer-up-depressed-cat.jpg",
        'type': 'image'
    }
}

result = project.file.from_packet(image_packet)
```
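The instance dictionaries are plain JSON-serializable structures, so a small helper can cut down repetition when building many boxes. A minimal sketch; `make_box` is our own helper, not part of the SDK:

```python
def make_box(name, x_min, y_min, x_max, y_max):
    """Build a Diffgram-style box instance dict."""
    return {
        'type': 'box',
        'name': name,
        'x_min': x_min,
        'y_min': y_min,
        'x_max': x_max,
        'y_max': y_max,
    }

# instance_list = [make_box('cat', 1, 1, 128, 128)]
```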
Actions and Brains (Beta)
Brain
Benefits of using prediction through the Diffgram brain:

- A clean abstraction over different deep learning methods, local vs. online prediction, and file types
- Designed for changing models and data. The same object you call .train() on can also call .predict()
- Ground-up support for many models. See local_cam for one example.

And of course local prediction - your model is your model.
Note: We plan to support many deep learning methods in the future, so while this is fairly heavily focused on object detection, the vast majority of concepts carry over to semantic segmentation and other methods.
Train
```python
brain = project.train.start(method = "object_detection",
                            name = "my_model")

brain.check_status()
```
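Training is asynchronous, so you will typically poll until it completes. A minimal sketch of a generic polling helper; `wait_until` is our own, and the lambda in the commented usage is an assumption, since the return value of `check_status()` is not documented here:

```python
import time

def wait_until(condition, timeout=600, interval=15):
    """Poll `condition()` until it returns a truthy value or `timeout`
    seconds elapse. Returns the last result of `condition()`."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result or time.monotonic() >= deadline:
            return result
        time.sleep(interval)

# Hypothetical usage -- adapt the condition to whatever
# brain.check_status() actually returns:
# wait_until(lambda: brain.check_status(), interval=30)
```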
Predict Online
Predicting online requires no advanced setup and uses fewer local compute resources.

There are three ways to send files for online prediction.

From a local file path:

```python
inference = brain.predict_from_local(path)
```

From a URL, i.e. a remote cloud server:

```python
inference = brain.predict_from_url(url)
```

From a Diffgram file:

```python
inference = brain.predict_from_file(file_id = 111546)
```
Predict Local
Predicting locally downloads the model weights, graph definition, and relevant labels, then sets up the model. Warning: this may use a significant amount of your local compute resources. By default the model downloads to a temp directory, although you are welcome to download and save the model to disk.
Local prediction, with local file
Same as before, except we set the local flag to True:

```python
brain = project.get_model(
    name = None,
    local = True)
```

Then we can call:

```python
inference = brain.predict_from_local(path)
```
Local prediction, two models with visual
Get two models:
```python
page_brain = project.get_model(
    name = "page_example_name",
    local = True)

graphs_brain = project.get_model(
    name = "graph_example_name",
    local = True)
```
This opens an image from a local path and runs both brains on the same image. We only read the image once, so you can stack as many networks as you need here.

```python
# Read the image bytes once and share them between brains
with open(path, "rb") as f:
    image = f.read()

page_inference = page_brain.run(image)
graphs_inference = graphs_brain.run(image)
```

Optional, render a visual:

```python
output_image = page_brain.visual(image)
output_image = graphs_brain.visual(output_image)
```
Consider the "page" brain: most pages look the same, so it will need less data and less retraining to reach an acceptable level of performance. You can then have a separate network, retrained often, to detect items of interest on the page (i.e. graphs).