Network builder for BigML deepnet topologies

Project description

BigML Sense/Net

Sense/Net is a BigML interface to TensorFlow. It takes a network specification as a dictionary (read from BigML's JSON model format) and instantiates a TensorFlow compute graph based on that specification.

Entry Points

The library is meant, in general, to take a BigML model specification as a JSON document, along with an optional map of settings, and to return a lightweight wrapper around a tf.keras.Model built from these arguments. The wrapper creation function can be found in sensenet.models.wrappers.create_model.

Pretrained Networks

Often, BigML-trained deepnets use networks pretrained on ImageNet, either as a starting point for fine-tuning or as the base layers under a custom set of readout layers. The weights for these networks are stored in a public S3 bucket and downloaded as needed for training or inference (see the sensenet.pretrained module). If the pretrained weights are never needed, no downloading occurs.

By default, these are downloaded to and read from the directory ~/.bigml_sensenet (which is created if it is not present). To change the location of this directory, clients can set the environment variable BIGML_SENSENET_CACHE_PATH.
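For example, the cache directory can be redirected by setting the environment variable before the library is imported (the variable name is the one given above; the path here is purely illustrative):

```python
import os

# Point the pretrained-weight cache at a custom directory.
# Set this before importing the sensenet modules that read it.
os.environ["BIGML_SENSENET_CACHE_PATH"] = "/tmp/sensenet_cache"

print(os.environ["BIGML_SENSENET_CACHE_PATH"])
```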

Model Instantiation

To instantiate a model, pass the model specification and the dict of additional, optional settings to models.wrappers.create_model. For example:

model = wrappers.create_model(a_dict, settings={'image_path_prefix': 'images/path/'})

Again, a_dict is typically a downloaded BigML model, read into a Python dictionary via json.load or similar. You may also pass the path to a file containing such a model:

model = wrappers.create_model('model.json', settings=None)

A similar function, models.wrappers.create_image_feature_extractor, allows clients to create a model object that returns instead the outputs of the final global pooling or flattening layer of the image model, given an image as input:

extractor = create_image_feature_extractor("resnet18", None)
extractor("path/to/image.jpeg").shape # (1, 512)

Note that this only works for networks with at least one image input, and does not work for bounding box models, since those models have no global pooling or flattening step.

For both create_image_feature_extractor and create_model, settings can either be None (the default) or a dict of optional settings, which may contain any of the arguments listed below.

Settings Arguments

These arguments can be passed to models.wrappers.create_image_feature_extractor or models.wrappers.create_model to change the input or output behavior of the model. Note that the settings specific to bounding box models are ignored if the model is not of the bounding box type.

  • bounding_box_threshold: For object detection models only, the minimal score that an object can have and still be surfaced to the user as part of the output. The default is 0.5; lowering the threshold surfaces more (possibly spurious) boxes for each input image.

  • color_space: A string which is one of ['rgb', 'rgba', 'bgr', 'bgra']. The first three letters give the order of the color channels (red, green, and blue) in the input tensors that will be passed to the model. The presence or absence of a final 'a' indicates whether an alpha channel will be present (the channel itself is ignored). This can be useful for matching the color space of the model's input to that produced by another library, such as OpenCV. Note that TensorFlow uses RGB ordering by default, and all image files it reads are decoded as RGB. This argument is generally only necessary if input_image_format is 'pixel_values', and may break predictions if specified when the input is a file.

  • iou_threshold: A threshold on the overlap (intersection over union) that boxes predicting the same class must have before they are considered to bound the same object. The default is 0.5; lower values eliminate boxes that would otherwise have been surfaced to the user.

  • max_objects: The maximum number of bounding boxes to return for each image in bounding box models. The default is 32.

  • rescale_type: A string which is one of ['warp', 'pad', 'crop']. If 'warp', input images are scaled to the input dimensions specified in the network, and their aspect ratios are not preserved. If 'pad', the image is resized to the largest dimensions that fit inside the input dimensions of the network (preserving its aspect ratio), then padded with constant pixels below or to the right to create an appropriately sized image. For example, if the input dimensions of the network are 100 x 100 and we attempt to classify a 300 x 600 image, the image is first rescaled to 50 x 100, then padded on the right to create a 100 x 100 image. If 'crop', the image is resized to the smallest dimensions that contain the input dimensions of the network (again preserving its aspect ratio), then centrally cropped to the specified sizes. Using the sizes in the previous example, the image would be rescaled to 100 x 200, then cropped by 50 pixels on the top and bottom to create a 100 x 100 image.

While these are not the only settings possible, these are the ones most likely to be useful to clients; other settings are typically only useful for very specific client applications.
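The 'pad' and 'crop' geometry described above can be sketched as plain scaling arithmetic. This is an illustration of the behavior, not the library's actual implementation:

```python
def scaled_size(in_w, in_h, net_w, net_h, rescale_type):
    """Intermediate (width, height) an image is resized to before
    padding or cropping, preserving its aspect ratio."""
    if rescale_type == "warp":
        return net_w, net_h  # aspect ratio is not preserved
    scale_fit = min(net_w / in_w, net_h / in_h)   # image fits inside network dims
    scale_fill = max(net_w / in_w, net_h / in_h)  # network dims fit inside image
    scale = scale_fit if rescale_type == "pad" else scale_fill
    return round(in_w * scale), round(in_h * scale)

# The 300 x 600 image and 100 x 100 network from the example above:
assert scaled_size(300, 600, 100, 100, "pad") == (50, 100)    # then pad right
assert scaled_size(300, 600, 100, 100, "crop") == (100, 200)  # then crop top/bottom
```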

Model Formats and Conversion

The canonical format for sensenet models is the JSON format downloadable from BigML. However, as the JSON is fairly heavyweight, time-consuming to parse, and not consumable in certain environments, SenseNet offers a conversion utility, sensenet.models.wrappers.convert, which takes the JSON format as input and can output the following formats:

  • tflite exports the model in the TensorFlow Lite format, which allows lightweight prediction on mobile devices.

  • tfjs exports the model to the format read by TensorFlow.js, for predictions in the browser or server-side in Node.js.

  • smbundle exports the model to a (proprietary) lightweight wrapper around the TensorFlow SavedModel format. The generated file is a concatenation of the files in the SavedModel directory, with some additional information written to the assets sub-directory. If this file is passed to create_model, the bundle is extracted to a temporary directory, the model instantiated, and the temporary files deleted. To extract the bundle without instantiating the model, see the functions in sensenet.models.bundle.

  • h5 exports the model weights only, in the Keras h5 format (i.e., via the TensorFlow function tf.keras.Model.save_weights). To use these, you'd instantiate the model from JSON and load the weights separately using the corresponding TensorFlow load_weights function.

Usage

Once instantiated, you can use the model to make predictions by using the returned model as a function, like so:

prediction = model([1.0, 2.0, 3.0])

The input point or points must be a list (or nested list) containing the input data for each point, in the order implied by model._preprocessors. Categorical and image variables should be passed as strings, where the image is either a path to the image on disk, or the raw compressed image bytes.

For classification or regression models, the function returns a numpy array where each row is the model's prediction for the corresponding input point. For classification models, each row contains a probability for each class; for regression models, each row contains a single entry.
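To recover a predicted class label from such an output, take the argmax of each row. The probabilities and class names below are purely illustrative; the real class order comes from the model's objective field:

```python
import numpy as np

# Hypothetical classification output: one row of class
# probabilities per input point.
predictions = np.array([[0.1, 0.7, 0.2],
                        [0.8, 0.1, 0.1]])
labels = ["setosa", "versicolor", "virginica"]  # illustrative class names

# Map each row to the label with the highest probability.
best = [labels[i] for i in predictions.argmax(axis=1)]
# best == ["versicolor", "setosa"]
```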

For object detection models, the input should always be a single image (again, either a file path, a compressed byte string, or an array of pixel values, depending on the settings map), and the result will be a list of detected boxes, each represented as a dictionary. For example:

In [5]: model('pizza_people.jpg')
Out[5]:
[{'box': [16, 317, 283, 414], 'label': 'pizza', 'score': 0.9726969599723816},
 {'box': [323, 274, 414, 332], 'label': 'pizza', 'score': 0.7364346981048584},
 {'box': [158, 29, 400, 327], 'label': 'person', 'score': 0.6204285025596619},
 {'box': [15, 34, 283, 336], 'label': 'person', 'score': 0.5346986055374146},
 {'box': [311, 23, 416, 255], 'label': 'person', 'score': 0.41961848735809326}]

The box array contains the coordinates of the detected box, as x1, y1, x2, y2, where those coordinates represent the upper-left and lower-right corners of each bounding box, in a coordinate system with (0, 0) at the upper-left of the input image. The score is the rough probability that the object has been correctly identified, and the label is the detected class of the object.
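Working with this output is plain list-of-dicts manipulation. The sketch below filters detections by score (mirroring what bounding_box_threshold does inside the model) and computes intersection over union between two boxes in the [x1, y1, x2, y2] format described above; it is an illustration of the output format, not library code:

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    return inter / (area(a) + area(b) - inter)

# Two of the detections from the sample output above.
detections = [
    {"box": [16, 317, 283, 414], "label": "pizza", "score": 0.97},
    {"box": [311, 23, 416, 255], "label": "person", "score": 0.42},
]

# Keep only boxes above a score threshold.
confident = [d for d in detections if d["score"] >= 0.5]
assert [d["label"] for d in confident] == ["pizza"]

# These two boxes do not overlap at all.
assert iou(detections[0]["box"], detections[1]["box"]) == 0.0
```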
