
Macrocosmos Python SDK

The official Python SDK for Macrocosmos.

Installation

Using pip

pip install macrocosmos

Using uv

uv add macrocosmos

Usage

For a comprehensive overview of available functionality and integration patterns, refer to the Macrocosmos SDK guide.

Apex

Apex is a decentralized agentic inference engine powered by Subnet 1 on the Bittensor network. You can read more about this subnet on the Macrocosmos Apex page.

Use the synchronous ApexClient or the asynchronous AsyncApexClient for inference tasks; an asynchronous variant is sketched after the example below. See the examples for additional features and functionality.

Chat Completions

import macrocosmos as mc

client = mc.ApexClient(api_key="<your-api-key>", app_name="my_app")
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Write a short story about a cosmonaut learning to paint."}],
)

print(response)
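The same call is available on the asynchronous client. A minimal sketch, assuming AsyncApexClient mirrors the synchronous interface with awaitable methods:

import asyncio

import macrocosmos as mc


async def main():
    client = mc.AsyncApexClient(api_key="<your-api-key>", app_name="my_app")
    response = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Write a short story about a cosmonaut learning to paint."}],
    )
    print(response)


asyncio.run(main())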

Web Search

import macrocosmos as mc

client = mc.ApexClient(api_key="<your-api-key>", app_name="my_app")
response = client.web_search.search(
    search_query="What is Bittensor?",
    max_results_per_miner=3,  # maximum results each miner returns
    max_response_time=20,     # maximum time (in seconds) to wait for responses
)

print(response)

Deep Researcher

Submit a deep researcher job

import macrocosmos as mc

client = mc.ApexClient(api_key="<your-api-key>", app_name="my_app")
submitted_response = client.deep_research.create_job(
    messages=[
        {
            "role": "user",
            "content": """Can you propose a mechanism by which a decentralized network
            of AI agents could achieve provable alignment on abstract ethical principles
            without relying on human-defined ontologies or centralized arbitration?""",
        }
    ]
)

print(submitted_response)

Retrieve the results of a deep researcher job

import macrocosmos as mc

client = mc.ApexClient(api_key="<your-api-key>", app_name="my_app")
polled_response = client.deep_research.get_job_results(job_id="<your-job-id>")

print(polled_response)
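In practice you will usually submit a job and then poll until it finishes. A minimal sketch tying the two calls together; the job_id and status field names on the response objects are assumptions and may differ in the actual SDK:

import time

import macrocosmos as mc

client = mc.ApexClient(api_key="<your-api-key>", app_name="my_app")

submitted = client.deep_research.create_job(
    messages=[{"role": "user", "content": "Survey approaches to decentralized AI alignment."}]
)

# `job_id` on the submission and `status` on the result are assumed field names.
job_id = submitted.job_id
while True:
    polled = client.deep_research.get_job_results(job_id=job_id)
    if getattr(polled, "status", None) in ("completed", "failed"):
        break
    time.sleep(30)  # deep research jobs can take several minutes

print(polled)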

SN13 OnDemandAPI

SN13 is focused on large-scale data collection. With the OnDemandAPI, you can run precise, real-time queries against platforms like X (Twitter), Reddit, and YouTube.

As of the latest data-universe release:

  • Users may select between two post-filtering modes via the keyword_mode parameter:
    • "any": Returns posts that contain any of the listed keywords.
    • "all": Returns posts that contain all of the keywords (default).
  • For Reddit requests, the first keyword in the list names the requested subreddit; subsequent keywords are treated as normal keywords (see the Reddit sketch after the query example below).
  • For YouTube requests, supply exactly one of the following: a single username (the YouTube channel name) or a single keyword (a YouTube video URL).

Use the synchronous Sn13Client to query historical or current data based on users, keywords, and time range.

Query Example

import macrocosmos as mc

client = mc.Sn13Client(api_key="<your-api-key>", app_name="my_app")

response = client.sn13.OnDemandData(
    source='X',                 # or 'Reddit'
    usernames=["@nasa"],        # Optional, up to 5 users
    keywords=["galaxy"],        # Optional, up to 5 keywords
    start_date='2025-04-15',    # Defaults to 24h range if not specified
    end_date='2025-05-15',      # Defaults to current time if not specified
    limit=1000,                 # Optional, up to 1000 results
    keyword_mode='any'          # Optional, "any" or "all"
)

print(response)
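The Reddit convention described above (first keyword names the subreddit) looks like this in practice. A minimal sketch with illustrative values:

import macrocosmos as mc

client = mc.Sn13Client(api_key="<your-api-key>", app_name="my_app")

# The first keyword names the subreddit; the rest are ordinary keywords.
response = client.sn13.OnDemandData(
    source='Reddit',
    keywords=["r/MachineLearning", "transformers"],
    start_date='2025-04-15',
    end_date='2025-05-15',
    limit=100,
)

print(response)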

Gravity

Gravity is a decentralized data collection platform powered by Subnet 13 (Data Universe) on the Bittensor network. You can read more about this subnet on the Macrocosmos Data Universe page.

Use the synchronous GravityClient or asynchronous AsyncGravityClient to create and monitor data collection tasks. See examples/gravity_workflow_example.py for a complete working example of a data collection CLI that you can use for your next big project or plug straight into your favorite data product.

Creating a Gravity Task for Data Collection

Gravity tasks are registered on the network immediately, so miners can start working on your job right away. The job stays registered for 7 days, after which a dataset of the collected data is generated automatically and an email is sent to the address you specify.

import macrocosmos as mc

client = mc.GravityClient(api_key="<your-api-key>", app_name="my_app")

gravity_tasks = [
    {"topic": "#ai", "platform": "x"},
    {"topic": "r/MachineLearning", "platform": "reddit"},
]

notification = {
    "type": "email",
    "address": "<your-email-address>",
    "redirect_url": "https://app.macrocosmos.ai/",
}

response = client.gravity.CreateGravityTask(
    gravity_tasks=gravity_tasks, name="My First Gravity Task", notification_requests=[notification]
)

# Print the gravity task ID
print(response)

Get the status of a Gravity Task and its Crawlers

If you wish to get further information about the crawlers, you can use the include_crawlers flag or make separate GetCrawler() calls, since returning crawlers in bulk can be slow; a GetCrawler() sketch follows the example below.

import macrocosmos as mc

client = mc.GravityClient(api_key="<your-api-key>", app_name="my_app")

response = client.gravity.GetGravityTasks(gravity_task_id="<your-gravity-task-id>", include_crawlers=False)

# Print the details about the gravity task and crawler IDs
print(response)
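If you only need one crawler, you can fetch it directly. A minimal sketch, assuming GetCrawler() accepts a crawler ID returned by GetGravityTasks():

import macrocosmos as mc

client = mc.GravityClient(api_key="<your-api-key>", app_name="my_app")

# Fetch a single crawler by ID instead of returning all crawlers in bulk
response = client.gravity.GetCrawler(crawler_id="<your-crawler-id>")

print(response)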

Build Dataset

If you do not want to wait 7 days for your data, you can request it earlier. Add a notification to be alerted when the build is complete, or monitor the status by calling GetDataset() (sketched after the example below). Once the dataset is built, the gravity task will be de-registered. Calling CancelDataset() will cancel an in-progress build or, if the build is already complete, purge the created dataset.

import macrocosmos as mc

client = mc.GravityClient(api_key="<your-api-key>", app_name="my_app")

notification = {
    "type": "email",
    "address": "<your-email-address>",
    "redirect_url": "https://app.macrocosmos.ai/",
}

response = client.gravity.BuildDataset(
    crawler_id="<your-crawler-id>", notification_requests=[notification]
)

# Print the dataset ID
print(response)
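A minimal monitoring sketch for the build, assuming GetDataset() and CancelDataset() accept the dataset ID returned by BuildDataset(); the status field checked below is an assumption about the response shape:

import time

import macrocosmos as mc

client = mc.GravityClient(api_key="<your-api-key>", app_name="my_app")

dataset_id = "<your-dataset-id>"  # returned by BuildDataset()

# Poll until the build finishes; the `status` field name is an assumption.
while True:
    response = client.gravity.GetDataset(dataset_id=dataset_id)
    if getattr(response, "status", None) in ("completed", "failed"):
        break
    time.sleep(30)

print(response)

# Cancel an in-progress build, or purge the dataset if the build is complete:
# client.gravity.CancelDataset(dataset_id=dataset_id)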
