Crystal Computing AI Python Library
The Crystal Computing AI Python library provides convenient access to the Crystal Computing AI API from applications written in the Python language. It includes a pre-defined set of classes for API resources that initialize themselves dynamically from API responses, which makes it compatible with a wide range of versions of the Crystal Computing AI API.
You can find usage examples for the Crystal Computing AI Python library in our API reference and the Crystal Computing AI Cookbook.
Credit
This library is forked from the OpenAI Python Library which is forked from the Stripe Python Library.
Installation
To start, ensure you have Python 3.7.1 or newer. If you just want to use the package, run:
pip install --upgrade TODO_PACKAGE_NAME
After you have installed the package, import it at the top of a file:
import TODO_PACKAGE_NAME
To install this package from source to make modifications to it, run the following command from the root of the repository:
python setup.py install
Optional dependencies
Install dependencies for TODO_PACKAGE_NAME.embeddings_utils:
pip install TODO_PACKAGE_NAME[embeddings]
Install support for Weights & Biases which can be used for fine-tuning:
pip install TODO_PACKAGE_NAME[wandb]
Data libraries like numpy and pandas are not installed by default due to their size. They're needed for some functionality of this library, but generally not for talking to the API. If you encounter a MissingDependencyError, install them with:
pip install TODO_PACKAGE_NAME[datalib]
Usage
The library needs to be configured with your Crystal Computing AI account's private API key, which is available on our developer platform. Either set it as the CRYSTALAI_API_KEY environment variable before using the library:
export CRYSTALAI_API_KEY='crystal_...'
Or set TODO_PACKAGE_NAME.api_key to its value:
TODO_PACKAGE_NAME.api_key = "crystal_..."
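A minimal sketch of picking up the key from the environment at startup. The environment variable name comes from the docs above; failing fast on a missing key is just one convention, and the hard-coded demo value is for illustration only:

```python
import os

# Demo only: simulate the `export CRYSTALAI_API_KEY=...` step from the shell.
os.environ["CRYSTALAI_API_KEY"] = "crystal_example_key"

api_key = os.environ.get("CRYSTALAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set the CRYSTALAI_API_KEY environment variable first")

# Assign once at startup (uncomment with the real package installed):
# TODO_PACKAGE_NAME.api_key = api_key
```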
Examples of how to use this library to accomplish various tasks can be found in the Crystal Computing AI Cookbook. It contains code examples for: classification using fine-tuning, clustering, code search, customizing embeddings, question answering from a corpus of documents, recommendations, visualization of embeddings, and more.
Most endpoints support a request_timeout param. This param takes a Union[float, Tuple[float, float]] and will raise a TODO_PACKAGE_NAME.error.Timeout error if the request exceeds that time in seconds (see: https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).
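Since a timeout can surface on any call, a common pattern is a small retry wrapper with exponential backoff. This runnable sketch uses a generic exception class as a stand-in for the library's timeout error, and a flaky function as a stand-in for an API call:

```python
import time

class RequestTimeout(Exception):
    """Stand-in for the library's timeout error class."""

def with_retries(fn, retries=3, backoff=0.01, exc=RequestTimeout):
    """Call fn, retrying on timeout with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except exc:
            if attempt == retries - 1:
                raise  # out of retries: re-raise the last timeout
            time.sleep(backoff * 2 ** attempt)

# Demo: a call that times out twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_api_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RequestTimeout()
    return "ok"

result = with_retries(flaky_api_call)
```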
Chat completions
Chat models such as gpt-3.5-turbo and gpt-4 can be called using the chat completions endpoint.
completion = TODO_PACKAGE_NAME.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
print(completion.choices[0].message.content)
You can learn more in our chat completions guide.
Completions
Text models such as babbage-002 or davinci-002 (and our legacy completions models) can be called using the completions endpoint.
completion = TODO_PACKAGE_NAME.Completion.create(model="davinci-002", prompt="Hello world")
print(completion.choices[0].text)
You can learn more in our completions guide.
Embeddings
Embeddings are designed to measure the similarity or relevance between text strings. To get an embedding for a text string, you can use the following:
text_string = "sample text"
model_id = "text-embedding-ada-002"
embedding = TODO_PACKAGE_NAME.Embedding.create(input=text_string, model=model_id)['data'][0]['embedding']
You can learn more in our embeddings guide.
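Once you have embeddings, similarity is usually measured with cosine similarity. A self-contained sketch with numpy, using short toy vectors in place of API-returned embeddings (real text-embedding-ada-002 vectors are 1536-dimensional):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for embeddings of three texts.
emb_cat = [0.2, 0.9, 0.1]
emb_kitten = [0.25, 0.85, 0.15]
emb_car = [0.9, 0.1, 0.4]

sim_close = cosine_similarity(emb_cat, emb_kitten)  # related texts
sim_far = cosine_similarity(emb_cat, emb_car)       # unrelated texts
```

Related texts score higher than unrelated ones, which is the basis for the search, clustering, and recommendation examples in the Cookbook.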
Fine-tuning
Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and lower the cost/latency of API calls by reducing the need to include training examples in prompts.
# Create a fine-tuning job with an already uploaded file
TODO_PACKAGE_NAME.FineTuningJob.create(training_file="file-abc123", model="gpt-3.5-turbo")
# List 10 fine-tuning jobs
TODO_PACKAGE_NAME.FineTuningJob.list(limit=10)
# Retrieve the state of a fine-tune
TODO_PACKAGE_NAME.FineTuningJob.retrieve("ft-abc123")
# Cancel a job
TODO_PACKAGE_NAME.FineTuningJob.cancel("ft-abc123")
# List up to 10 events from a fine-tuning job
TODO_PACKAGE_NAME.FineTuningJob.list_events(id="ft-abc123", limit=10)
# Delete a fine-tuned model (must be an owner of the org the model was created in)
TODO_PACKAGE_NAME.Model.delete("ft:gpt-3.5-turbo:acemeco:suffix:abc123")
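The training_file above refers to an uploaded JSONL file with one training example per line. A sketch of producing such a file; the chat-message schema shown is an assumption modeled on the gpt-3.5-turbo fine-tuning format:

```python
import json
import os
import tempfile

# Two toy training examples in an assumed chat-message schema.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Hello world"},
        {"role": "assistant", "content": "Hi."},
    ]},
    {"messages": [
        {"role": "user", "content": "Ping"},
        {"role": "assistant", "content": "Pong."},
    ]},
]

path = os.path.join(tempfile.mkdtemp(), "training.jsonl")
with open(path, "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")  # JSONL: one JSON object per line

# Sanity check: every line parses back and carries a messages list.
with open(path) as f:
    parsed = [json.loads(line) for line in f]
```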
You can learn more in our fine-tuning guide.
To log the training results from fine-tuning to Weights & Biases use:
TODO_PACKAGE_NAME wandb sync
For more information, see the Weights & Biases (wandb) documentation.
Moderation
Crystal Computing AI provides a free Moderation endpoint that can be used to check whether content complies with the Crystal Computing AI content policy.
moderation_resp = TODO_PACKAGE_NAME.Moderation.create(input="Here is some perfectly innocuous text that follows all Crystal Computing AI content policies.")
You can learn more in our moderation guide.
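A moderation response reports a per-input flagged flag plus per-category results. This sketch walks a hand-written sample response; the exact response shape is an assumption based on the OpenAI-style API this library is forked from:

```python
# Hand-written sample shaped like an assumed Moderation response.
moderation_resp = {
    "results": [
        {
            "flagged": False,
            "categories": {"harassment": False, "violence": False},
            "category_scores": {"harassment": 0.0001, "violence": 0.0002},
        }
    ]
}

result = moderation_resp["results"][0]
is_flagged = result["flagged"]
# Collect only the categories the endpoint actually flagged.
flagged_categories = [name for name, hit in result["categories"].items() if hit]
```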
Async API
Async support is available in the API by prepending "a" to a network-bound method:
async def create_chat_completion():
    return await TODO_PACKAGE_NAME.ChatCompletion.acreate(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}])
# run with: asyncio.run(create_chat_completion())
To make async requests more efficient, you can pass in your own aiohttp.ClientSession, but you must manually close the client session at the end of your program/event loop:
from aiohttp import ClientSession
TODO_PACKAGE_NAME.aiosession.set(ClientSession())
# At the end of your program, close the http session
await TODO_PACKAGE_NAME.aiosession.get().close()
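The main payoff of the async API is issuing several requests concurrently with asyncio.gather. This runnable sketch uses a stub coroutine in place of TODO_PACKAGE_NAME.ChatCompletion.acreate:

```python
import asyncio

async def fake_acreate(model, messages):
    """Stand-in for TODO_PACKAGE_NAME.ChatCompletion.acreate."""
    await asyncio.sleep(0)  # yield to the event loop, as a real network call would
    return {"model": model, "content": f"echo: {messages[-1]['content']}"}

async def main():
    prompts = ["Hello world", "Ping", "Status?"]
    tasks = [
        fake_acreate("gpt-3.5-turbo", [{"role": "user", "content": p}])
        for p in prompts
    ]
    # Run all requests concurrently; results come back in task order.
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```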
Command-line interface
This library additionally provides a TODO_PACKAGE_NAME command-line utility which makes it easy to interact with the API from your terminal. Run TODO_PACKAGE_NAME api -h for usage.
# list models
TODO_PACKAGE_NAME api models.list
# create a chat completion (gpt-3.5-turbo, gpt-4, etc.)
TODO_PACKAGE_NAME api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"
# create a completion (text-davinci-003, text-davinci-002, ada, babbage, curie, davinci, etc.)
TODO_PACKAGE_NAME api completions.create -m ada -p "Hello world"
# generate images via DALL·E API
TODO_PACKAGE_NAME api image.create -p "two dogs playing chess, cartoon" -n 1
# using TODO_PACKAGE_NAME through a proxy
TODO_PACKAGE_NAME --proxy=http://proxy.com api models.list