DarcyAI Library

Project description

Darcy AI Engine

AI Engine Pipeline

Darcy AI Engine is a Python library that makes building AI apps as easy as building any other type of app. AI Engine exposes high-level constructs (InputStream, Perceptor, Callback, OutputStream) that you assemble in a Pipeline with a few lines of Python.

To get started, see the Build Guide, look at the examples, and consult the Python reference docs.
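Conceptually, data flows from an InputStream, through one or more Perceptors, to an OutputStream, with the Pipeline wiring the pieces together. The toy sketch below illustrates that flow in plain Python; the class names and methods here are stand-ins for illustration, not the real DarcyAI API.

```python
# Toy illustration of the InputStream -> Perceptor -> OutputStream flow.
# These classes are illustrative stand-ins, not DarcyAI constructs.

class ToyInputStream:
    def frames(self):
        # Yield raw input items; a real stream might yield camera frames.
        yield from [1, 2, 3]

class ToyPerceptor:
    def perceive(self, item):
        # Produce a "perception" from the raw input.
        return item * 10

class ToyOutputStream:
    def __init__(self):
        self.received = []

    def write(self, perception):
        self.received.append(perception)

def run_pipeline(input_stream, perceptor, output_stream):
    # The pipeline's job: move each input item through the perceptor
    # and hand the result to the output stream.
    for item in input_stream.frames():
        output_stream.write(perceptor.perceive(item))

out = ToyOutputStream()
run_pipeline(ToyInputStream(), ToyPerceptor(), out)
print(out.received)  # [10, 20, 30]
```

The real Pipeline adds configuration, callbacks, and error handling on top of this basic flow, as the example below shows.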

Example

This code (from basic.py) shows how easy it is to create a minimal AI Engine pipeline:

# Add the DarcyAI components that we need, particularly the InputStream, OutputStream, Pipeline, and PerceptorMock
from darcyai.tests.perceptor_mock import PerceptorMock
from darcyai.pipeline import Pipeline
from sample_input_stream import SampleInputStream
from sample_output_stream import SampleOutputStream

# Define a class to hold all of our operations
class SingleStreamDemo:
    def __init__(self):
        # Create an input stream and an output stream that we can use in our demo
        ping = SampleInputStream()
        output_stream = SampleOutputStream()

        # Give our class a pipeline property and instantiate it with a Darcy AI pipeline
        self.__pipeline = Pipeline(input_stream=ping,
                                   input_stream_error_handler_callback=self.__input_stream_error_handler_callback,
                                   universal_rest_api=True,
                                   rest_api_base_path="/pipeline",
                                   rest_api_host="0.0.0.0",
                                   rest_api_port=8080)

        # Add our output stream to the pipeline
        self.__pipeline.add_output_stream("output", self.__output_stream_callback, output_stream)

        # Create a mock perceptor and add it to the pipeline
        p1 = PerceptorMock()
        self.__pipeline.add_perceptor("p1", p1, accelerator_idx=0, input_callback=self.__perceptor_input_callback)

    # Define a "run" method that just calls "run" on the pipeline
    def run(self):
        self.__pipeline.run()

    # Define an input callback for the mock perceptor that passes the data onward unchanged
    def __perceptor_input_callback(self, input_data, pom, config):
        return input_data

    # Define an output stream callback that does not manipulate the data
    def __output_stream_callback(self, pom, input_data):
        pass

    # Define an input stream error handler callback that just continues onward
    def __input_stream_error_handler_callback(self, exception):
        pass

# In the main thread, start the application by instantiating our demo class and calling "run"
if __name__ == "__main__":
    single_stream = SingleStreamDemo()
    single_stream.run()
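The input callback in the example is a pass-through, but it is also the natural place to transform data before the perceptor sees it. The hypothetical callback below reuses the same (input_data, pom, config) signature from the code above; the transformation itself is invented for illustration.

```python
# Hypothetical input callback that normalizes data before the perceptor
# sees it; the signature matches the pass-through callback above.
def scaling_input_callback(input_data, pom, config):
    # Example transformation: scale 8-bit values into the 0-1 range.
    # (A vision perceptor might instead crop or resize a frame here.)
    return [x / 255.0 for x in input_data]

print(scaling_input_callback([0, 51, 255], pom=None, config=None))  # [0.0, 0.2, 1.0]
```

You would pass such a callback as the input_callback argument to add_perceptor, exactly as the pass-through version is wired up in the example.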

Issues, Contributing, Discussion

If you discover an issue with AI Engine, check the existing issues or create a new one. You can also submit a pull request or join the discussions.

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

darcyai-2.2.0.dev20230113-py3-none-any.whl (13.9 MB, Python 3)

File details

Details for the file darcyai-2.2.0.dev20230113-py3-none-any.whl.

File hashes

Hashes for darcyai-2.2.0.dev20230113-py3-none-any.whl
Algorithm Hash digest
SHA256 2ac588235d30dc40c7458e714927d3c030133ea7c3d04e9942ff30ff045a6e23
MD5 3018a5cc7597093664a0c2caec9d7021
BLAKE2b-256 ab4c49df8fe61d69ef57cade9829e6638906a2f4ead22537b222b85c6ee9c75d

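The hashes above let you confirm that a downloaded wheel is intact and untampered. A minimal sketch using Python's standard hashlib module (the file path is assumed; the expected digest is the SHA256 value published above):

```python
# Verify a downloaded wheel against a published SHA256 digest.
import hashlib

def sha256_of_file(path, chunk_size=65536):
    # Stream the file in chunks so large wheels don't load into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage against the digest published above (path is assumed):
# expected = "2ac588235d30dc40c7458e714927d3c030133ea7c3d04e9942ff30ff045a6e23"
# assert sha256_of_file("darcyai-2.2.0.dev20230113-py3-none-any.whl") == expected
```

pip can also enforce hashes at install time via its hash-checking mode (requirements files with --hash entries).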
