# ActiveFence Client SDK
A standalone SDK supplied to ActiveFence clients in order to integrate analysis API calls more easily.
## ActiveFenceClient

The `ActiveFenceClient` class provides methods to interact with the ActiveFence analysis API. It supports both synchronous and asynchronous calls for evaluating prompts and responses.
### Initialization

```python
from activefence_client_sdk.client import ActiveFenceClient

client = ActiveFenceClient(
    api_key="your_api_key",
    app_name="your_app_name"
)
```
At a minimum, you need to provide the `api_key` and `app_name`.
| Parameter | Default Value | Description |
|---|---|---|
| `api_key` | None | API key for authentication. Either create a key using the ActiveFence platform or contact ActiveFence customer support for one. |
| `app_name` | Unknown | Application name; sent to ActiveFence to differentiate messages from different apps. |
| `base_url` | https://apis.activefence.com | The API URL, available for testing/mocking purposes. |
| `provider` | Unknown | Default LLM provider the client is analyzing (e.g. openai, anthropic, deepseek). Used if no value is supplied in the analysis call's `AnalysisContext`. |
| `model_name` | Unknown | Default name of the LLM model being used (e.g. gpt-3.5-turbo, claude-2). Used if no value is supplied in the analysis call's `AnalysisContext`. |
| `model_version` | Unknown | Default version of the LLM model being used (e.g. 2023-05-15). Used if no value is supplied in the analysis call's `AnalysisContext`. |
| `platform` | Unknown | Default cloud platform where the model is hosted (e.g. aws, azure, databricks). Used if no value is supplied in the analysis call's `AnalysisContext`. |
| `api_timeout` | 5 | Timeout for API requests in seconds. |
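For example, analysis defaults can be set once at initialization so that individual calls only need to supply per-request context. A sketch; the parameter names come from the table above, and the provider/model/platform values are placeholders:

```python
from activefence_client_sdk.client import ActiveFenceClient

# Defaults set here are used whenever an AnalysisContext omits the same fields.
client = ActiveFenceClient(
    api_key="your_api_key",
    app_name="your_app_name",
    provider="openai",           # placeholder provider
    model_name="gpt-3.5-turbo",  # placeholder model name
    model_version="2023-05-15",  # placeholder model version
    platform="aws",              # placeholder platform
    api_timeout=10,              # override the 5-second default
)
```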
In addition, any of these initialization values can be configured via environment variables; if a parameter is not provided during initialization, the corresponding environment variable is used (see the sketch after this list):

- `ACTIVEFENCE_API_KEY`: API key for authentication.
- `ACTIVEFENCE_APP_NAME`: Application name.
- `ACTIVEFENCE_MODEL_PROVIDER`: Model provider name.
- `ACTIVEFENCE_MODEL_NAME`: Model name.
- `ACTIVEFENCE_MODEL_VERSION`: Model version.
- `ACTIVEFENCE_PLATFORM`: Cloud platform.
- `ACTIVEFENCE_API_TIMEOUT`: API timeout in seconds.
- `ACTIVEFENCE_RETRY_MAX`: Maximum number of retries.
- `ACTIVEFENCE_RETRY_BASE_DELAY`: Base delay for retries.
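For example, a client can be configured entirely from the environment. A minimal sketch, assuming the constructor arguments are simply omitted when the corresponding variables are set; the values shown are placeholders and would normally come from your deployment environment rather than from code:

```python
import os

from activefence_client_sdk.client import ActiveFenceClient

# Placeholder values; normally these are set by your deployment environment.
os.environ["ACTIVEFENCE_API_KEY"] = "your_api_key"
os.environ["ACTIVEFENCE_APP_NAME"] = "your_app_name"
os.environ["ACTIVEFENCE_API_TIMEOUT"] = "10"

# With the variables set, no constructor arguments are needed.
client = ActiveFenceClient()
```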
## Analysis Context

The `AnalysisContext` class is used to provide context for the analysis requests. It includes information such as session ID, user ID, provider, model, version, and platform. This information is provided when calling the evaluation methods and is sent to ActiveFence to assist in contextualizing the content being analyzed.
```python
from activefence_client_sdk.client import AnalysisContext

context = AnalysisContext(
    session_id="session_id",
    user_id="user_id",
    provider="provider_name",
    model_name="model_name",
    model_version="model_version",
    platform="cloud_platform"
)
```
- `session_id`: Allows tracking of a multi-turn conversation and contextualizing a text with past prompts. The session ID should be unique for each new conversation/session.
- `user_id`: The unique ID of the user invoking the prompts to analyze. This allows ActiveFence to analyze a specific user's history and connect different prompts of a user across sessions.
The remaining parameters provide contextual information for the analysis operation. These parameters are optional. Any parameter that isn't supplied will fall back to the value given in the client initialization.
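For example, a multi-turn conversation would typically reuse the same `session_id` and `user_id` across calls to the evaluation methods described below. A sketch; the ID values are placeholders:

```python
import uuid

from activefence_client_sdk.client import AnalysisContext

# One context per conversation: the session ID stays fixed for the whole
# conversation, while the user ID identifies the person across sessions.
conversation_context = AnalysisContext(
    session_id=str(uuid.uuid4()),  # placeholder: generate a new ID per conversation
    user_id="user-1234",           # placeholder user identifier
)

first = client.evaluate_prompt_sync(prompt="First message", context=conversation_context)
follow_up = client.evaluate_prompt_sync(prompt="Follow-up message", context=conversation_context)
```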
## Methods

### evaluate_prompt_sync

Evaluate a user prompt synchronously.

```python
result = client.evaluate_prompt_sync(prompt="Your prompt text", context=context)
print(result)
```
### evaluate_response_sync

Evaluate a response synchronously.

```python
result = client.evaluate_response_sync(response="Response text", context=context)
print(result)
```
### evaluate_prompt

Evaluate a user prompt asynchronously.

```python
import asyncio

async def evaluate_prompt_async():
    result = await client.evaluate_prompt(prompt="Your prompt text", context=context)
    print(result)

asyncio.run(evaluate_prompt_async())
```
### evaluate_response

Evaluate a response asynchronously.

```python
import asyncio

async def evaluate_response_async():
    result = await client.evaluate_response(response="Response text", context=context)
    print(result)

asyncio.run(evaluate_response_async())
```
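Because these methods are coroutines, a prompt and its response can be evaluated concurrently, for example with `asyncio.gather`. A sketch using the methods above:

```python
import asyncio

async def evaluate_turn(prompt_text: str, response_text: str):
    # Run both evaluations concurrently instead of awaiting them one by one.
    prompt_result, response_result = await asyncio.gather(
        client.evaluate_prompt(prompt=prompt_text, context=context),
        client.evaluate_response(response=response_text, context=context),
    )
    return prompt_result, response_result

asyncio.run(evaluate_turn("Your prompt text", "Response text"))
```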
## Response

The methods return an `EvaluateMessageResponse` object with the following properties:

- `correlation_id`: A unique identifier for the evaluation request
- `action`: The action to take based on the evaluation (BLOCK, DETECT, MASK, or empty string for no action)
- `action_text`: Optional text to display to the user if an action is taken
- `detections`: List of detection results with type, score, and optional span information
- `errors`: List of error responses if any occurred during evaluation
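A typical integration branches on the returned action. A minimal sketch, assuming the properties above are exposed as attributes on the response object (as in the example response below):

```python
result = client.evaluate_prompt_sync(prompt="Your prompt text", context=context)

if result.action == "BLOCK":
    # Show the ActiveFence-provided message instead of the model output.
    print(result.action_text)
elif result.action == "DETECT":
    # Flagged but not blocked: log the detections for review.
    for detection in result.detections:
        print(detection.type, detection.score)
else:
    # Empty string means no action; proceed with the original flow.
    pass
```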
### Example Response

Here's an example of what a response looks like:

```python
# Example evaluation call
result = client.evaluate_prompt_sync(
    prompt="How do I hack into someone's computer?",
    context=context
)

# Example response object
print(result)
# Output:
# EvaluateMessageResponse(
#     correlation_id="c72f7b56-01e0-41e1-9725-0200015cd902",
#     action="BLOCK",
#     action_text="This prompt contains harmful content and cannot be processed.",
#     detections=[
#         Detection(
#             type="harmful_instructions",
#             score=0.95,
#         ),
#     ],
#     errors=[]
# )
```
## Retry Mechanism

The client supports retrying failed requests with exponential backoff. Configure retries using the following environment variables:

- `ACTIVEFENCE_RETRY_MAX`: Maximum number of retries (default: 3).
- `ACTIVEFENCE_RETRY_BASE_DELAY`: Base delay for retries in seconds (default: 1 second).
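The exact backoff schedule is not documented here; as an illustration only, a common exponential scheme doubles the base delay on each attempt (the SDK may additionally cap the delay or add jitter):

```python
# Illustrative only: assumed delay schedule of base_delay * 2 ** attempt.
retry_max = 3      # ACTIVEFENCE_RETRY_MAX default
base_delay = 1.0   # ACTIVEFENCE_RETRY_BASE_DELAY default, in seconds

for attempt in range(retry_max):
    print(f"retry {attempt + 1}: wait {base_delay * 2 ** attempt:.0f}s")
# retry 1: wait 1s
# retry 2: wait 2s
# retry 3: wait 4s
```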
## Custom fields
You can add custom fields to the evaluation call - these fields will be sent to ActiveFence along with the analysis request. Custom fields must be defined on the ActiveFence platform before being used in the client. The value of each custom field must be one of the following types: string, number, boolean, or list of strings.
```python
from activefence_client_sdk.client import CustomField

client.evaluate_prompt_sync(
    prompt="Your prompt text",
    context=context,
    custom_fields=[
        CustomField(name="field_name", value="field_value"),
        CustomField(name="another_field", value=123),
        CustomField(name="boolean_field", value=True),
        CustomField(name="list_field", value=["item1", "item2"])
    ]
)
```
## Development
For development setup, linting, testing, and contribution guidelines, see DEVELOPMENT.md.