Python wrapper for Oracle Field Service API
OFSC
A simple Python wrapper for Oracle OFS REST API
Async Client
Starting with version 2.19, pyOFSC includes an async client (AsyncOFSC) that provides asynchronous API access using httpx and Python's async/await patterns.
Implementation Status: The async client is being implemented progressively. Currently available async methods are marked with [Sync & Async] tags throughout this documentation.
Usage Example
```python
from ofsc.async_client import AsyncOFSC

async with AsyncOFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
) as client:
    # Get workzones asynchronously
    workzones = await client.metadata.get_workzones(offset=0, limit=100)

    # Get a specific workzone
    workzone = await client.metadata.get_workzone("ATLANTA")

    # Create a new workzone
    from ofsc.models import Workzone

    new_zone = Workzone(
        workZoneLabel="NEW_ZONE",
        workZoneName="New Zone",
        status="active",
        travelArea="enterprise"
    )
    result = await client.metadata.create_workzone(new_zone)
```
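Since async with can only appear inside a coroutine, the example above has to be driven by an event loop. A minimal sketch using asyncio.run (the main function name is illustrative):

```python
import asyncio

from ofsc.async_client import AsyncOFSC


async def main():
    # The async context manager creates and closes the underlying httpx client
    async with AsyncOFSC(
        clientID="your_client_id",
        secret="your_secret",
        companyName="your_company",
    ) as client:
        workzones = await client.metadata.get_workzones(offset=0, limit=100)
        for workzone in workzones.items:
            print(workzone.workZoneLabel)


if __name__ == "__main__":
    asyncio.run(main())
```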
Key Features
- Async/Await Support: Full async/await pattern support for non-blocking I/O
- Same Models: Reuses all existing Pydantic models from the sync version
- Context Manager: Must be used as an async context manager to properly manage HTTP client lifecycle
- Simplified API: Async methods always return Pydantic models (no response_type parameter)
Currently Implemented Async Methods
- Metadata / Workzones:
get_workzones, get_workzone, create_workzone, replace_workzone
More async methods will be added progressively. Check the [Sync & Async] tags in the function listings below to see which methods support async.
Models
Starting with OFSC 1.17, models are provided for the most common entities and metadata. All models should be imported from ofsc.models. All existing create functions will eventually be transitioned to models.
The models are based on Pydantic's BaseModel, so an entity can be built from a dict using the model_validate class method.
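For example, a Workzone can be built from keyword arguments or validated from a plain dict. A minimal sketch using the Workzone fields shown in the examples below:

```python
from ofsc.models import Workzone

# Validate a plain dict (e.g. a payload loaded from JSON) into a model
payload = {
    "workZoneLabel": "ATLANTA",
    "workZoneName": "Atlanta Metro Area",
    "status": "active",
    "travelArea": "sunrise_enterprise",
}
workzone = Workzone.model_validate(payload)

# Standard Pydantic serialization is available on every model
print(workzone.model_dump_json())
```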
Core Models
- Activity: Main activity entity with all properties
- Resource: Resource entity (users, technicians, etc.)
- ResourceType: Resource type definitions
- Location: Geographic locations and resource locations
- AssignedLocation: Location assignments for resources
- BaseUser: User entity for resource management
Metadata Models
- ActivityTypeGroup: Activity type group definitions
- ActivityType: Activity type definitions with colors, features, and time slots
- CapacityArea: Capacity area definitions with parent relationships
- CapacityCategory: Capacity category definitions
- InventoryType: Inventory type definitions
- Property: Property definitions with validation and enumeration support
- EnumerationValue: Enumeration values for properties
- RoutingProfile: Routing profile definitions (groups of routing plans)
- RoutingPlan: Routing plan definitions
- RoutingPlanData: Complete routing plan export with configuration
- RoutingPlanConfig: Detailed routing plan configuration with optimization parameters
- RoutingActivityGroup: Activity group configuration within routing plan
- RoutingProviderGroup: Provider group settings within activity group
- Workskill: Work skill definitions
- WorkSkillCondition: Work skill condition definitions
- WorkSkillGroup: Work skill group definitions
- Workzone: Work zone definitions with keys, shapes, and organization
- WorkzoneListResponse: Paginated response for workzone lists
Organization & Application Models
- Application: Application definitions with resource access
- Organization: Organization entity definitions
Bulk Operations Models
- BulkUpdateRequest: Request model for bulk activity updates
- BulkUpdateResponse: Response model with results, errors, and warnings
- BulkUpdateActivityItem: Individual activity item for bulk operations
Schedule & Calendar Models
- ResourceWorkScheduleItem: Work schedule definitions for resources
- CalendarView: Calendar view with shifts and time slots
- CalendarViewItem: Individual calendar items with recurrence support
- Recurrence: Recurrence pattern definitions
Daily Extract Models
- DailyExtractFolders: Available extract date folders
- DailyExtractFiles: Available files for a specific date
- DailyExtractItem: Individual extract file information
Capacity Models
- CapacityRequest: Request model for capacity queries with CsvList support for string arrays (areas, dates, categories)
- GetCapacityResponse: Response model for capacity data
- GetQuotaRequest: Request model for quota queries with automatic CsvList conversion for string arrays
- GetQuotaResponse: Response model for quota data
- CapacityResponseItem: Individual capacity response item by date
- CapacityAreaResponseItem: Capacity area response with metrics and categories
- CapacityMetrics: Capacity metrics with count and optional minutes arrays
- CapacityCategoryItem: Capacity category items with calendar and available metrics
- QuotaAreaItem: Quota area response with quota-specific fields (maxAvailable, used, bookedActivities, etc.)
Configuration & Utility Models
- OFSConfig: Main configuration model for API connection
- OFSResponseList: Generic paginated response wrapper
- CsvList: Auxiliary model for comma-separated string lists with conversion methods
- Translation: Multi-language translation support
- OFSAPIError: Standardized API error responses
Functions implemented
Core / Activities
get_activities(self, params, response_type=OBJ_RESPONSE)
get_activity(self, activity_id, response_type=OBJ_RESPONSE)
update_activity(self, activity_id, data, response_type=OBJ_RESPONSE)
move_activity(self, activity_id, data, response_type=OBJ_RESPONSE)
search_activities(self, params, response_type=OBJ_RESPONSE)
bulk_update(self, data: BulkUpdateRequest, response_type=OBJ_RESPONSE)
get_file_property(self, activityId, label, mediaType="application/octet-stream", response_type=OBJ_RESPONSE)
get_all_activities(self, root=None, date_from=date.today()-timedelta(days=7), date_to=date.today()+timedelta(days=7), activity_fields=["activityId", "activityType", "date", "resourceId", "status"], additional_fields=None, initial_offset=0, include_non_scheduled=False, limit=5000)
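As an illustration of the activity functions above, the snippet below pages through recent activities with the get_all_activities helper and reads a single activity; the credentials, date window, and activity id are placeholders:

```python
from datetime import date, timedelta

from ofsc import OFSC

instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Page through all activities scheduled in a two-week window around today
activities = instance.core.get_all_activities(
    date_from=date.today() - timedelta(days=7),
    date_to=date.today() + timedelta(days=7),
    activity_fields=["activityId", "activityType", "date", "resourceId", "status"],
)

# Read a single activity by its id (placeholder id)
activity = instance.core.get_activity(4225599)
```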
Core / Events
get_subscriptions(self, response_type=OBJ_RESPONSE)
create_subscription(self, data, response_type=OBJ_RESPONSE)
delete_subscription(self, subscription_id, response_type=OBJ_RESPONSE)
get_subscription_details(self, subscription_id, response_type=OBJ_RESPONSE)
get_events(self, params, response_type=OBJ_RESPONSE)
Core / Resources
get_resource(self, resource_id, inventories=False, workSkills=False, workZones=False, workSchedules=False, response_type=OBJ_RESPONSE)
get_resources(self, fields=None, offset=0, limit=100, canBeTeamHolder=None, canParticipateInTeam=None, inventories=False, workSkills=False, workZones=False, workSchedules=False, response_type=OBJ_RESPONSE)
create_resource(self, resourceId, data, response_type=OBJ_RESPONSE)
create_resource_from_obj(self, resourceId, data, response_type=OBJ_RESPONSE)
update_resource(self, resourceId, data: dict, identify_by_internal_id: bool = False, response_type=OBJ_RESPONSE)
get_position_history(self, resource_id, date, response_type=OBJ_RESPONSE)
get_resource_route(self, resource_id, date, activityFields=None, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_resource_descendants(self, resource_id, resourceFields=None, offset=0, limit=100, inventories=False, workSkills=False, workZones=False, workSchedules=False, response_type=OBJ_RESPONSE)
get_resource_users(self, resource_id, response_type=OBJ_RESPONSE)
set_resource_users(self, resource_id, users: tuple[str], response_type=OBJ_RESPONSE)
delete_resource_users(self, resource_id, response_type=OBJ_RESPONSE)
get_resource_workschedules(self, resource_id, actualDate: date, response_type=OBJ_RESPONSE)
set_resource_workschedules(self, resource_id, data: ResourceWorkScheduleItem, response_type=OBJ_RESPONSE)
get_resource_calendar(self, resource_id: str, dateFrom: date, dateTo: date, response_type=OBJ_RESPONSE)
get_resource_inventories(self, resource_id, response_type=OBJ_RESPONSE)
get_resource_assigned_locations(self, resource_id, response_type=OBJ_RESPONSE)
get_resource_workzones(self, resource_id, response_type=OBJ_RESPONSE)
get_resource_workskills(self, resource_id, response_type=OBJ_RESPONSE)
bulk_update_resource_workzones(self, data, response_type=OBJ_RESPONSE)
bulk_update_resource_workskills(self, data, response_type=OBJ_RESPONSE)
bulk_update_resource_workschedules(self, data, response_type=OBJ_RESPONSE)
get_resource_locations(self, resource_id, response_type=OBJ_RESPONSE)
create_resource_location(self, resource_id, location: Location, response_type=OBJ_RESPONSE)
delete_resource_location(self, resource_id, location_id, response_type=OBJ_RESPONSE)
get_assigned_locations(self, resource_id, dateFrom: date = date.today(), dateTo: date = date.today(), response_type=OBJ_RESPONSE)
set_assigned_locations(self, resource_id: str, data: AssignedLocationsResponse, response_type=OBJ_RESPONSE)
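A short sketch of the resource functions above; the resource and bucket ids are placeholders, and the date argument is passed as a datetime.date value, matching the annotations used in this listing:

```python
from datetime import date

from ofsc import OFSC

instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Fetch a resource together with its work skills and work zones
resource = instance.core.get_resource(
    "TECH_001",
    workSkills=True,
    workZones=True,
)

# Route assigned to that resource for today
route = instance.core.get_resource_route("TECH_001", date.today())

# Direct and indirect children of a bucket, first page of 100 entries
descendants = instance.core.get_resource_descendants(
    "SOUTH_REGION",
    offset=0,
    limit=100,
)
```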
Core / Users
get_users(self, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_user(self, login, response_type=OBJ_RESPONSE)
update_user(self, login, data, response_type=OBJ_RESPONSE)
create_user(self, login, data, response_type=OBJ_RESPONSE)
delete_user(self, login, response_type=OBJ_RESPONSE)
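A minimal sketch of the user functions above; the login and the update payload keys are illustrative:

```python
from ofsc import OFSC

instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Page through users, 100 at a time
users = instance.core.get_users(offset=0, limit=100)

# Read a single user by login (placeholder login)
user = instance.core.get_user("admin")

# Update a user; the payload is passed to the REST API,
# so the keys shown here are only an example
instance.core.update_user("admin", {"language": "en"})
```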
Core / Daily Extract
get_daily_extract_dates(self, response_type=OBJ_RESPONSE)
get_daily_extract_files(self, date, response_type=OBJ_RESPONSE)
get_daily_extract_file(self, date, filename, response_type=FILE_RESPONSE)
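A sketch of the Daily Extract functions above; the date and file name are placeholders, and it is assumed that the default FILE_RESPONSE returns the raw HTTP response so the file content can be written to disk:

```python
from ofsc import OFSC

instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# List the dates for which daily extract files are available
folders = instance.core.get_daily_extract_dates()

# List the files produced for one of those dates (placeholder date)
files = instance.core.get_daily_extract_files("2025-06-25")

# Download one of the files and save it locally (placeholder file name)
response = instance.core.get_daily_extract_file("2025-06-25", "activity-data.xml.zip")
with open("activity-data.xml.zip", "wb") as f:
    f.write(response.content)
```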
Core / Helper Functions
get_all_properties(self, initial_offset=0, limit=100)
Metadata / Activity Type Groups
get_activity_type_groups(self, expand="parent", offset=0, limit=100, response_type=OBJ_RESPONSE)
get_activity_type_group(self, label, response_type=OBJ_RESPONSE)
Metadata / Activity Types
get_activity_types(self, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_activity_type(self, label, response_type=OBJ_RESPONSE)
Metadata / Capacity
get_capacity_areas(self, expandParent: bool = False, fields: list[str] = ["label"], activeOnly: bool = False, areasOnly: bool = False, response_type=OBJ_RESPONSE)
get_capacity_area(self, label: str, response_type=OBJ_RESPONSE)
get_capacity_categories(self, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_capacity_category(self, label: str, response_type=OBJ_RESPONSE)
Metadata / Inventory
get_inventory_types(self, response_type=OBJ_RESPONSE)
get_inventory_type(self, label: str, response_type=OBJ_RESPONSE)
Metadata / Properties
get_properties(self, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_property(self, label: str, response_type=OBJ_RESPONSE)
create_or_replace_property(self, property: Property, response_type=OBJ_RESPONSE)
get_enumeration_values(self, label: str, offset=0, limit=100, response_type=OBJ_RESPONSE)
create_or_update_enumeration_value(self, label: str, value: Tuple[EnumerationValue, ...], response_type=OBJ_RESPONSE)
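A short sketch of the property functions above; the property label is a placeholder:

```python
from ofsc import OFSC

instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Page through property definitions
properties = instance.metadata.get_properties(offset=0, limit=100)

# Read a single property and its enumeration values (placeholder label)
prop = instance.metadata.get_property("XA_CASE_STATUS")
values = instance.metadata.get_enumeration_values("XA_CASE_STATUS", offset=0, limit=100)
```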
Metadata / Workskills
get_workskills(self, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_workskill(self, label: str, response_type=OBJ_RESPONSE)
create_or_update_workskill(self, skill: Workskill, response_type=OBJ_RESPONSE)
delete_workskill(self, label: str, response_type=OBJ_RESPONSE)
get_workskill_conditions(self, response_type=OBJ_RESPONSE)
replace_workskill_conditions(self, data: WorskillConditionList, response_type=OBJ_RESPONSE)
get_workskill_groups(self, response_type=OBJ_RESPONSE)
get_workskill_group(self, label: str, response_type=OBJ_RESPONSE)
create_or_update_workskill_group(self, group: WorkSkillGroup, response_type=OBJ_RESPONSE)
delete_workskill_group(self, label: str, response_type=OBJ_RESPONSE)
Metadata / Plugins
import_plugin(self, plugin: str)
import_plugin_file(self, plugin: Path)
Metadata / Resource Types
get_resource_types(self, response_type=OBJ_RESPONSE)
Metadata / Workzones [Sync & Async]
get_workzones(self, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_workzone(self, label: str, response_type=OBJ_RESPONSE)
create_workzone(self, workzone: Workzone, response_type=OBJ_RESPONSE) # Async only
replace_workzone(self, workzone: Workzone, auto_resolve_conflicts: bool = False, response_type=OBJ_RESPONSE)
Metadata / Routing Profiles
get_routing_profiles(self, offset=0, limit=100, response_type=OBJ_RESPONSE)
get_routing_profile_plans(self, profile_label: str, offset=0, limit=100, response_type=OBJ_RESPONSE)
export_routing_plan(self, profile_label: str, plan_label: str, response_type=OBJ_RESPONSE)
export_plan_file(self, profile_label: str, plan_label: str) -> bytes
import_routing_plan(self, profile_label: str, plan_data: bytes, response_type=OBJ_RESPONSE)
force_import_routing_plan(self, profile_label: str, plan_data: bytes, response_type=OBJ_RESPONSE)
start_routing_plan(self, profile_label: str, plan_label: str, resource_external_id: str, date: str, response_type=OBJ_RESPONSE)
Metadata / Applications
get_applications(self, response_type=OBJ_RESPONSE)
get_application(self, label: str, response_type=OBJ_RESPONSE)
get_application_api_accesses(self, label: str, response_type=OBJ_RESPONSE)
get_application_api_access(self, label: str, accessId: str, response_type=OBJ_RESPONSE)
Metadata / Organizations
get_organizations(self, response_type=OBJ_RESPONSE)
get_organization(self, label: str, response_type=OBJ_RESPONSE)
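A minimal sketch of the application and organization functions above; the labels are placeholders:

```python
from ofsc import OFSC

instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Applications registered in the instance and their API access entries
applications = instance.metadata.get_applications()
accesses = instance.metadata.get_application_api_accesses("my_application")

# Organizations defined in the instance
organizations = instance.metadata.get_organizations()
organization = instance.metadata.get_organization("default")
```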
Capacity / Available Capacity
getAvailableCapacity(self, dates, areas, categories=None, aggregateResults=None, availableTimeIntervals="all", calendarTimeIntervals="all", fields=None, response_type=OBJ_RESPONSE)
getQuota(self, dates, areas=None, categories=None, aggregateResults=None, categoryLevel=None, intervalLevel=None, returnStatuses=None, timeSlotLevel=None, response_type=OBJ_RESPONSE)
Usage Examples
Capacity API
```python
from ofsc import OFSC
from ofsc.models import CsvList

# Initialize connection
ofsc_instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Get capacity data with individual parameters
response = ofsc_instance.capacity.getAvailableCapacity(
    dates=["2025-06-25", "2025-06-26"],   # Required
    areas=["Atlantic", "Pacific"],        # Required
    availableTimeIntervals="all",         # Optional
    calendarTimeIntervals="all"           # Optional
)

# Access response data
for item in response.items:
    print(f"Date: {item.date}")
    for area in item.areas:
        print(f"  Area: {area.label}")
        print(f"  Calendar count: {area.calendar.count}")
        if area.available:
            print(f"  Available count: {area.available.count}")

# Alternative input formats also work:

# CSV string format
response = ofsc_instance.capacity.getAvailableCapacity(
    dates="2025-06-25,2025-06-26",
    areas="Atlantic,Pacific",
    categories="Install,Repair"
)

# CsvList format
response = ofsc_instance.capacity.getAvailableCapacity(
    dates=CsvList.from_list(["2025-06-25"]),
    areas=CsvList.from_list(["Atlantic"]),
    aggregateResults=True
)
```
Quota API with CsvList
```python
from ofsc.models import GetQuotaRequest, CsvList

# Create a quota request with list[str] (automatically converted to CsvList)
quota_request = GetQuotaRequest(
    aggregateResults=True,
    areas=["Atlantic", "Pacific"],        # list[str] - auto-converted
    categories=["Install", "Repair"],     # list[str] - auto-converted
    categoryLevel=True,
    dates=["2025-06-25", "2025-06-26"],   # list[str] - auto-converted
    intervalLevel=False,
    returnStatuses=True,
    timeSlotLevel=False
)

# Or with CsvList directly
quota_request2 = GetQuotaRequest(
    aggregateResults=False,
    areas=CsvList.from_list(["Europe", "Asia"]),   # CsvList input
    categories="Service,Support",                  # CSV string - auto-converted
    categoryLevel=False,
    dates=["2025-06-27", "2025-06-28"],
    intervalLevel=True,
    returnStatuses=False,
    timeSlotLevel=True
)

# Access as lists
areas_list = quota_request.get_areas_list()            # ["Atlantic", "Pacific"]
categories_list = quota_request.get_categories_list()  # ["Install", "Repair"]
```
Quota API Function
```python
from ofsc import OFSC

# Initialize connection
ofsc_instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Simple quota request with individual parameters
quota_response = ofsc_instance.capacity.getQuota(
    dates=["2025-06-25", "2025-06-26"],   # Required
    areas=["Atlantic", "Pacific"],        # Optional
    aggregateResults=True,                # Optional
    categoryLevel=False                   # Optional
)

# Minimal quota request (only the required dates)
minimal_quota = ofsc_instance.capacity.getQuota(
    dates=["2025-06-27"]
    # All other parameters default to None
)

# Mixed input types
mixed_quota = ofsc_instance.capacity.getQuota(
    dates="2025-06-28,2025-06-29",   # CSV string
    areas=["Europe", "Asia"],        # List
    categories="Install,Repair",     # CSV string
    returnStatuses=True
)
```
Routing Profiles API
```python
from ofsc import OFSC
from ofsc.common import FULL_RESPONSE

# Initialize connection
ofsc_instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Get all routing profiles
profiles = ofsc_instance.metadata.get_routing_profiles()
for profile in profiles.items:
    print(f"Profile: {profile.profileLabel}")

# Get plans for a specific profile
plans = ofsc_instance.metadata.get_routing_profile_plans(
    profile_label="MaintenanceRoutingProfile"
)
for plan in plans.items:
    print(f"Plan: {plan.planLabel}")

# Export a routing plan (returns parsed JSON)
plan_data = ofsc_instance.metadata.export_routing_plan(
    profile_label="MaintenanceRoutingProfile",
    plan_label="Optimization"
)

# Export a plan as raw bytes (ready for import)
plan_bytes = ofsc_instance.metadata.export_plan_file(
    profile_label="MaintenanceRoutingProfile",
    plan_label="Optimization"
)
# plan_bytes contains raw data that can be imported

# Import a routing plan (409 if the plan already exists)
response = ofsc_instance.metadata.import_routing_plan(
    profile_label="TargetProfile",
    plan_data=plan_bytes,
    response_type=FULL_RESPONSE
)
if response.status_code == 409:
    print("Plan already exists, use force_import to overwrite")

# Force import (overwrite an existing plan)
response = ofsc_instance.metadata.force_import_routing_plan(
    profile_label="MaintenanceRoutingProfile",
    plan_data=plan_bytes,
    response_type=FULL_RESPONSE
)
print(f"Import status: {response.status_code}")

# Start a routing plan for a specific resource
response = ofsc_instance.metadata.start_routing_plan(
    profile_label="MaintenanceRoutingProfile",
    plan_label="Optimization",
    resource_external_id="TECH_001",
    date="2025-10-25",
    response_type=FULL_RESPONSE
)
print(f"Start status: {response.status_code}")

# Complete workflow: back up and restore a routing plan
# 1. Export the plan
backup_data = ofsc_instance.metadata.export_plan_file(
    profile_label="MaintenanceRoutingProfile",
    plan_label="Optimization"
)

# 2. Save it to a file (optional)
with open("backup_optimization.dat", "wb") as f:
    f.write(backup_data)

# 3. Later, restore from the backup
with open("backup_optimization.dat", "rb") as f:
    restore_data = f.read()

response = ofsc_instance.metadata.force_import_routing_plan(
    profile_label="MaintenanceRoutingProfile",
    plan_data=restore_data,
    response_type=FULL_RESPONSE
)
print(f"Restore completed: {response.status_code}")
```
Workzones API
```python
from ofsc import OFSC
from ofsc.models import Workzone

# Initialize connection
ofsc_instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

# Get all workzones (returns WorkzoneListResponse)
workzones = ofsc_instance.metadata.get_workzones(offset=0, limit=100)
print(f"Total workzones: {workzones.totalResults}")
for workzone in workzones.items:
    print(f"Label: {workzone.workZoneLabel}, Name: {workzone.workZoneName}")
    print(f"  Status: {workzone.status}, Travel Area: {workzone.travelArea}")
    if workzone.keys:
        print(f"  Keys: {', '.join(workzone.keys)}")
    if workzone.shapes:
        print(f"  Shapes: {', '.join(workzone.shapes)}")

# Get a single workzone by label (returns Workzone)
workzone = ofsc_instance.metadata.get_workzone("ATLANTA")
print(f"Workzone: {workzone.workZoneName}")
print(f"Status: {workzone.status}")

# Replace/update a workzone
updated_workzone = Workzone(
    workZoneLabel="ATLANTA",
    workZoneName="Atlanta Metro Area",
    status="active",
    travelArea="sunrise_enterprise",
    keys=["ATL", "ATLANTA"],
    shapes=["12345", "67890"],
    organization="SOUTH_REGION"
)
result = ofsc_instance.metadata.replace_workzone(
    workzone=updated_workzone,
    auto_resolve_conflicts=True  # Automatically resolve key conflicts with other zones
)
print(f"Updated workzone: {result.workZoneLabel}")

# Using FULL_RESPONSE for the raw API response
from ofsc.common import FULL_RESPONSE

response = ofsc_instance.metadata.get_workzone(
    "ATLANTA",
    response_type=FULL_RESPONSE
)
if response.status_code == 200:
    data = response.json()
    print(f"Raw workzone data: {data}")
```
Test History
| OFS REST API Version | PyOFSC |
|---|---|
| 20C | 1.7 |
| 21A | 1.8, 1.8.1, 1.9 |
| 21D | 1.15 |
| 22B | 1.16, 1.17 |
| 22D | 1.18 |
| 24C | 2.0 |
Deprecation Warning
Starting in OFSC 2.0, all functions are called through the corresponding API module (Core or Metadata). See the examples below.
Instead of

```python
instance = OFSC(...)
list_of_activities = instance.get_activities(...)
```

it is now required to use the right API module:

```python
instance = OFSC(...)
list_of_activities = instance.core.get_activities(...)
```

During the transition period, a DeprecationWarning is raised if the functions are called in the old way.
What's new in OFSC 2.0
- All metadata functions now use models, when available
- All functions are now using the API name (Core or Metadata)
- All functions return a Python object by default. If there is an available model it will be used, otherwise a dict will be returned (see the response_type and auto_model parameters)
- Errors during API calls can raise exceptions, and do so by default when returning an object (see the auto_raise parameter)
- OBJ_RESPONSE and TEXT_RESPONSE are now deprecated. Use the response_type parameter to control the response type
Future Deprecation Notice - OFSC 3.0
Important: Starting with OFSC 3.0, the synchronous client (OFSC) will be deprecated in favor of the async client (AsyncOFSC).
Migration Path
- The async client (AsyncOFSC) is the recommended approach for all new development
- OFSC 3.0 will provide a compatibility wrapper to allow existing synchronous code to continue working without modifications
- The compatibility wrapper will internally use the async client with synchronous adapters
- We recommend gradually migrating to the async client to take advantage of better performance and scalability
Migration Example
Current synchronous code:
```python
from ofsc import OFSC

instance = OFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
)

workzones = instance.metadata.get_workzones(offset=0, limit=100)
```
Migrated async code:
```python
from ofsc.async_client import AsyncOFSC

async with AsyncOFSC(
    clientID="your_client_id",
    secret="your_secret",
    companyName="your_company"
) as client:
    workzones = await client.metadata.get_workzones(offset=0, limit=100)
```
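One advantage of migrating is that independent calls can be issued concurrently. A minimal sketch using asyncio.gather and the async get_workzone method shown earlier (the workzone labels and the helper function are illustrative):

```python
import asyncio

from ofsc.async_client import AsyncOFSC


async def fetch_workzones(labels):
    async with AsyncOFSC(
        clientID="your_client_id",
        secret="your_secret",
        companyName="your_company",
    ) as client:
        # All requests are issued concurrently instead of one after another
        return await asyncio.gather(
            *(client.metadata.get_workzone(label) for label in labels)
        )


workzones = asyncio.run(fetch_workzones(["ATLANTA", "NEW_ZONE"]))
```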
Timeline
- OFSC 2.x: Both sync and async clients fully supported
- OFSC 3.0: Sync client deprecated, compatibility wrapper provided
- OFSC 4.0: Sync client may be removed (compatibility wrapper will remain for at least one major version)