Synthesizer: A Framework for LLM-Powered Data

Project description

Synthesizer[ΨΦ]: A multi-purpose LLM framework 💡

With Synthesizer, users can:

  • Custom Data Creation: Generate datasets via LLMs that are tailored to your needs, for LLM training, RAG, and more.
    • Supported providers include Anthropic, OpenAI, vLLM, and HuggingFace.
  • Retrieval-Augmented Generation (RAG) on Demand: A built-in RAG provider interface anchors generated data to real-world sources.
    • Turnkey integration with the Agent Search API.
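As an illustration of the kind of tailored training record a data-creation run might produce, here is a minimal, self-contained sketch. The field names ("instruction", "completion", "source") are hypothetical, not a schema defined by Synthesizer:

```python
import json

# Hypothetical shape of one synthetic training record; the keys are
# illustrative, not the library's output format.
record = {
    "instruction": "Summarize the key idea of retrieval-augmented generation.",
    "completion": (
        "RAG grounds an LLM's output in retrieved documents, so the "
        "generated text can draw on real-world sources instead of the "
        "model's parametric memory alone."
    ),
    "source": "agent-search",  # illustrative provenance tag
}

# Datasets like this are commonly serialized as JSON Lines for training.
print(json.dumps(record))
```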

Documentation

For more detailed information, tutorials, and API references, please visit the official Synthesizer Documentation.

Fast Setup

pip install sciphi-synthesizer
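After installing, a quick sanity check confirms the package is importable (the PyPI distribution is sciphi-synthesizer, but the importable module is synthesizer):

```python
import importlib.util

# Look up the module without importing it; None means it is not installed.
spec = importlib.util.find_spec("synthesizer")
print("installed" if spec else "not installed")
```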

Community & Support

  • Engage with our vibrant community on Discord.
  • For tailored inquiries or feedback, please email us.

Example

The following example demonstrates how to connect to the Agent Search API through the Synthesizer RAG interface, and then use the retrieved context to generate a grounded response with an OpenAI-hosted LLM.

   from synthesizer.core import LLMProviderName, RAGProviderName
   from synthesizer.interface import (
      LLMInterfaceManager,
      RAGInterfaceManager,
   )
   from synthesizer.llm import GenerationConfig

   # Placeholder settings -- substitute your own provider names, endpoint,
   # model, and limits. The values below are illustrative, not defaults.
   rag_provider_name = "agent-search"
   rag_api_base = "https://api.sciphi.ai"  # example endpoint
   rag_limit_hierarchical_url_results = 25
   rag_limit_final_pagerank_results = 10
   llm_provider_name = "openai"
   llm_model_name = "gpt-3.5-turbo"
   llm_max_tokens_to_sample = 256
   llm_temperature = 0.1
   llm_top_p = 0.95
   query = "What is quantum computing?"
   rag_prompt = (
      "### Context:\n{rag_context}\n\n### Query:\n" + query + "\n\n### Response:"
   )

   # RAG Provider Settings
   rag_interface = RAGInterfaceManager.get_interface_from_args(
      RAGProviderName(rag_provider_name),
      api_base=rag_api_base,
      limit_hierarchical_url_results=rag_limit_hierarchical_url_results,
      limit_final_pagerank_results=rag_limit_final_pagerank_results,
   )
   rag_context = rag_interface.get_rag_context(query)

   # LLM Provider Settings
   llm_interface = LLMInterfaceManager.get_interface_from_args(
      LLMProviderName(llm_provider_name),
   )

   generation_config = GenerationConfig(
      model_name=llm_model_name,
      max_tokens_to_sample=llm_max_tokens_to_sample,
      temperature=llm_temperature,
      top_p=llm_top_p,
      # other generation params here ...
   )

   # Splice the retrieved context into the prompt and request a completion.
   formatted_prompt = rag_prompt.format(rag_context=rag_context)
   completion = llm_interface.get_completion(
      formatted_prompt, generation_config
   )
   print(completion)
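The temperature and top_p fields of GenerationConfig control sampling. As a self-contained illustration (independent of Synthesizer), here is how nucleus (top-p) filtering trims a token distribution before sampling:

```python
# Nucleus (top-p) filtering: keep the smallest set of highest-probability
# tokens whose cumulative probability reaches p, then renormalize.
def top_p_filter(probs, p=0.9):
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {token: prob / total for token, prob in kept.items()}

dist = {"the": 0.5, "a": 0.3, "quantum": 0.15, "zebra": 0.05}
print(top_p_filter(dist, p=0.9))  # "zebra" is dropped; the rest renormalize
```

Lower top_p values concentrate sampling on the most likely tokens, which is why low top_p (like low temperature) yields more deterministic completions.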

Download files

Download the file for your platform.

Source Distribution

sciphi_synthesizer-1.0.0.tar.gz (135.6 kB)

Built Distribution

sciphi_synthesizer-1.0.0-py3-none-any.whl (149.4 kB)
