Python client library for the Crystal Computing AI API

Project description

Crystal Computing AI Python Library

The Crystal Computing AI Python library provides convenient access to the Crystal Computing AI API from applications written in Python. It includes a pre-defined set of classes for API resources that initialize themselves dynamically from API responses, which makes it compatible with a wide range of versions of the Crystal Computing AI API.

You can find usage examples for the Crystal Computing AI Python library in our API reference and the Crystal Computing AI Cookbook.

Credit

This library is forked from the OpenAI Python library, which is in turn forked from the Stripe Python library.

Installation

To start, ensure you have Python 3.7.1 or newer. If you just want to use the package, run:

pip install --upgrade crystalai

After you have installed the package, import it at the top of a file:

import crystalai

To install this package from source to make modifications to it, run the following command from the root of the repository:

python setup.py install

Optional dependencies

Install dependencies for crystalai.embeddings_utils:

pip install crystalai[embeddings]

Install support for Weights & Biases which can be used for fine-tuning:

pip install crystalai[wandb]

Data libraries like numpy and pandas are not installed by default due to their size. They’re needed for some functionality of this library, but generally not for talking to the API. If you encounter a MissingDependencyError, install them with:

pip install crystalai[datalib]

Usage

The library needs to be configured with your Crystal Computing AI account's private API key, which is available on our developer platform. Either set it as the CRYSTALAI_API_KEY environment variable before using the library:

export CRYSTALAI_API_KEY='crystal_...'

Or set crystalai.api_key to its value:

crystalai.api_key = "crystal_..."
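As a sketch, the environment-variable route can be wired in with a small helper (the helper name is ours, not part of the library):

```python
import os

def resolve_api_key(env_var="CRYSTALAI_API_KEY"):
    """Read the API key from the environment, failing loudly if it is unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} to your Crystal Computing AI API key")
    return key

# Hypothetical wiring; equivalent to setting crystalai.api_key directly:
# crystalai.api_key = resolve_api_key()
```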

Examples of how to use this library to accomplish various tasks can be found in the Crystal Computing AI Cookbook. It contains code examples for: classification using fine-tuning, clustering, code search, customizing embeddings, question answering from a corpus of documents, recommendations, visualization of embeddings, and more.

Most endpoints support a request_timeout parameter. It accepts a Union[float, Tuple[float, float]] (in seconds) and raises a crystalai.error.Timeout error if the request exceeds that time (see: https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).
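To make the two timeout forms concrete, here is an illustrative helper (ours, not part of the library) mirroring how the requests library interprets the value: a single float covers both the connect and read phases, while a tuple sets them separately.

```python
def split_timeout(request_timeout):
    """Normalize a request_timeout value into (connect, read) seconds,
    following the requests library's timeout semantics."""
    if isinstance(request_timeout, tuple):
        connect, read = request_timeout
        return float(connect), float(read)
    return float(request_timeout), float(request_timeout)

# e.g. request_timeout=5 applies 5 s to both phases;
# request_timeout=(3, 30) allows a 3 s connect and a 30 s read.
```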

Chat completions

Chat models such as gpt-3.5-turbo and gpt-4 can be called using the chat completions endpoint.

completion = crystalai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
print(completion.choices[0].message.content)

You can learn more in our chat completions guide.

Completions

Text models such as babbage-002 or davinci-002 (and our legacy completions models) can be called using the completions endpoint.

completion = crystalai.Completion.create(model="davinci-002", prompt="Hello world")
print(completion.choices[0].text)

You can learn more in our completions guide.

Embeddings

Embeddings are designed to measure the similarity or relevance between text strings. To get an embedding for a text string, you can use the following:

text_string = "sample text"

model_id = "text-embedding-ada-002"

embedding = crystalai.Embedding.create(input=text_string, model=model_id)['data'][0]['embedding']
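Embedding vectors are typically compared with cosine similarity. The crystalai.embeddings_utils module provides helpers for this when the optional embeddings extra is installed; as a dependency-free sketch, the computation looks like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: 1.0 means identical
    direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```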

You can learn more in our embeddings guide.

Fine-tuning

Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and lower the cost/latency of API calls by reducing the need to include training examples in prompts.

# Create a fine-tuning job with an already uploaded file
crystalai.FineTuningJob.create(training_file="file-abc123", model="gpt-3.5-turbo")

# List 10 fine-tuning jobs
crystalai.FineTuningJob.list(limit=10)

# Retrieve the state of a fine-tune
crystalai.FineTuningJob.retrieve("ft-abc123")

# Cancel a job
crystalai.FineTuningJob.cancel("ft-abc123")

# List up to 10 events from a fine-tuning job
crystalai.FineTuningJob.list_events(id="ft-abc123", limit=10)

# Delete a fine-tuned model (must be an owner of the org the model was created in)
crystalai.Model.delete("ft:gpt-3.5-turbo:acemeco:suffix:abc123")

You can learn more in our fine-tuning guide.

To log the training results from fine-tuning to Weights & Biases use:

crystalai wandb sync

For more information on wandb sync, read the Weights & Biases documentation.

Moderation

Crystal Computing AI provides a free Moderation endpoint that can be used to check whether content complies with the Crystal Computing AI content policy.

moderation_resp = crystalai.Moderation.create(input="Here is some perfectly innocuous text that follows all Crystal Computing AI content policies.")
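Assuming the response follows the OpenAI-style shape (a top-level "results" list with a per-input "flagged" boolean), the verdict can be read out like this; the stub data below illustrates the assumed shape and is not a real API payload:

```python
def is_flagged(moderation_resp):
    """Return True if any input in the moderation response was flagged.
    Assumes an OpenAI-style response: {"results": [{"flagged": bool, ...}]}."""
    return any(result["flagged"] for result in moderation_resp["results"])

# Stubbed response illustrating the assumed shape:
stub_resp = {"results": [{"flagged": False}]}
print(is_flagged(stub_resp))  # False
```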

You can learn more in our moderation guide.

Async API

Async support is available in the API by prepending a to a network-bound method (for example, acreate instead of create):

async def create_chat_completion():
    chat_completion_resp = await crystalai.ChatCompletion.acreate(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
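A coroutine like the one above is driven with asyncio.run. The sketch below substitutes a stand-in coroutine so it runs without the library installed:

```python
import asyncio

async def create_chat_completion():
    # Stand-in for: await crystalai.ChatCompletion.acreate(...)
    await asyncio.sleep(0)
    return "Hello world"

# asyncio.run creates an event loop, runs the coroutine, and closes the loop.
result = asyncio.run(create_chat_completion())
print(result)  # Hello world
```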

To make async requests more efficient, you can pass in your own aiohttp.ClientSession, but you must manually close the client session at the end of your program/event loop:

from aiohttp import ClientSession
crystalai.aiosession.set(ClientSession())

# At the end of your program, close the http session
await crystalai.aiosession.get().close()

Command-line interface

This library additionally provides a crystalai command-line utility which makes it easy to interact with the API from your terminal. Run crystalai api -h for usage.

# list models
crystalai api models.list

# create a chat completion (gpt-3.5-turbo, gpt-4, etc.)
crystalai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"

# create a completion (text-davinci-003, text-davinci-002, ada, babbage, curie, davinci, etc.)
crystalai api completions.create -m ada -p "Hello world"

# generate images via DALL·E API
crystalai api image.create -p "two dogs playing chess, cartoon" -n 1

# using crystalai through a proxy
crystalai --proxy=http://proxy.com api models.list

Project details


Download files

Download the file for your platform.

Source Distribution

crystalai-0.28.0.tar.gz (60.2 kB)

Uploaded Source

Built Distribution

crystalai-0.28.0-py3-none-any.whl (76.9 kB)

Uploaded Python 3

File details

Details for the file crystalai-0.28.0.tar.gz.

File metadata

  • Download URL: crystalai-0.28.0.tar.gz
  • Size: 60.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for crystalai-0.28.0.tar.gz
Algorithm Hash digest
SHA256 eea247b5f34c7b795ac31d371430ec04f252f5a3b77da2cc2cd18d9f2b933de8
MD5 3c8a42aaff198ea5e7f5e4bea9bbcf90
BLAKE2b-256 0d6844a20a4208518d7bceef6faf712b2e50a93c36aa3642ea2ba1c85bd3ea7b

File details

Details for the file crystalai-0.28.0-py3-none-any.whl.

File metadata

  • Download URL: crystalai-0.28.0-py3-none-any.whl
  • Size: 76.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for crystalai-0.28.0-py3-none-any.whl
Algorithm Hash digest
SHA256 051c7bdd451beb6c6363e809584440ab5ee5804977354bd8fdc725826bbae8a2
MD5 1c1ffcad901d27a26676ac5267d31855
BLAKE2b-256 2a3d93f30cb792daef68ad5f0284fae1cdf25ab771de8dc875abacb492750567
