Combining LLMs and ASP for intelligent problem-solving and reasoning.

Project description

Installation

To install the package from PyPI, run the following command:

pip install llmasp

How to use

Application Specification File

The application file (app.yml in the example) is a configuration file that defines the problem-solving logic for your application using Answer Set Programming (ASP). It includes sections for preprocessing, knowledge base (the main logic), and postprocessing.

Structure of app.yml

The app.yml file is structured into three main sections:

  1. Preprocessing
  2. Knowledge Base
  3. Postprocessing

Each section has its own role in preparing the application, encoding the problem-solving logic, and formatting the output.

In the preprocessing section, you can define context information that should be applied before solving the problem and mappings from natural text to facts. The knowledge base section is where you define the main logic of your problem using ASP rules. These rules are used to encode the constraints, relationships, and logic that will drive the decision-making process. The postprocessing section defines the actions to take after the solution is generated by the ASP solver. This section allows you to format and refine the results, as well as provide specific responses.

Example from the file:

preprocessing:
- _: You are helping a user with their datalog questions.
- edge(node1,node2): List all the edges from 'node1' to 'node2'.
- reaches(node1,node2): Asks whether 'node2' is reachable from 'node1'.

knowledge_base: |
  reaches(X,Y) :- edge(X,Y).
  reaches(X,Y) :- edge(X,Z), reaches(Z,Y).

postprocessing:
- _: You are helping a user with their datalog questions.
- reaches(node1,node2): Say that 'node2' is reachable from 'node1'. 
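The two `reaches` rules encode the transitive closure of `edge`. As a sanity check, here is a minimal Python sketch of the same fixpoint computation (the graph data is illustrative and does not come from the library):

```python
# Fixpoint computation mirroring the two ASP rules:
#   reaches(X,Y) :- edge(X,Y).
#   reaches(X,Y) :- edge(X,Z), reaches(Z,Y).
edges = {(1, 3), (3, 4), (3, 5), (4, 2), (2, 5)}  # illustrative graph

reaches = set(edges)  # first rule: every edge is a reachability fact
changed = True
while changed:  # apply the second rule until no new facts appear
    changed = False
    for (x, z) in edges:
        for (z2, y) in list(reaches):
            if z == z2 and (x, y) not in reaches:
                reaches.add((x, y))
                changed = True

print((1, 2) in reaches)  # → True: node 2 is reachable from node 1
```

The ASP solver performs an equivalent saturation when it grounds and solves these rules.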

Behavior Specification File

The behavior specification file (beh.yml in the example) defines a global behavior and how the application should process and transform data during both the preprocessing and postprocessing stages. It guides the system in converting natural language input to Answer Set Programming (ASP) code and vice versa, ensuring the logic and output are aligned with the intended behavior.

The file is divided into the following sections:

  1. Preprocessing
  2. Postprocessing

Each section serves a different purpose in processing the data from natural language input to ASP code, and from ASP code back to natural language. The preprocessing section is responsible for converting natural language input into ASP code. It includes instructions that guide the translation process, ensuring that natural language descriptions are accurately and logically reflected in the corresponding ASP predicates. The postprocessing section is responsible for converting ASP facts into natural language output. This section defines how the results, represented as ASP facts, should be translated back into human-readable sentences.

preprocessing:
  init: | 
    As an ASP translator, your primary task is to convert natural language descriptions, 
    provided in the format [INPUT]input[/INPUT], into precise ASP code, outputting 
    in the format [OUTPUT]predicate(terms).[/OUTPUT]. Focus on identifying key entities 
    and relationships to create facts (e.g., [INPUT]Alice is happy[/INPUT] becomes [OUTPUT]happy(alice).[/OUTPUT], 
    [INPUT]Bob owns a car[/INPUT] becomes [OUTPUT]owns(bob, car).[/OUTPUT],
    [INPUT]The sky is blue[/INPUT] becomes [OUTPUT]color(sky, blue).[/OUTPUT], 
    and [INPUT]Cats are mammals[/INPUT] becomes [OUTPUT]mammal(cat).[/OUTPUT]). 
    Ensure that the natural language intent is accurately and logically reflected 
    in the ASP code, maintaining semantic accuracy and logical consistency.
  context: |
    Here is some context that you MUST analyze and always remember.
    {context}
    Remember this context and don't say anything!
  mapping: |
    [INPUT]{input}[/INPUT]
    {instructions}
    [OUTPUT]{atom}[/OUTPUT]

postprocessing:
  init: |
    As an ASP to natural language translator, you will convert ASP facts provided in the format 
    [FACTS]atoms[/FACTS] into clear natural language statements using predefined mapping instructions. 
    For example, [FACTS]happy(alice)[/FACTS] should be translated to "Alice is happy," 
    [FACTS]friend(alice, bob)[/FACTS] to "Alice is friends with Bob," and [FACTS]owns(bob, car)[/FACTS] 
    to "Bob owns a car." Ensure each fact is accurately and clearly represented in natural language, 
    maintaining the integrity of the original information.
  context: |
    Here is some context that you MUST analyze and remember.
    {context}
    Remember this context and don't say anything!
  mapping: |
    [FACTS]{facts}[/FACTS]
    Each fact matching {atom} must be interpreted as follows: {instructions}
  summarize: |
    "Summarize the following responses: {responses}"
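The `{placeholder}` fields in these templates are filled in for each mapping entry before the prompt is sent to the LLM. A hypothetical sketch of that substitution using Python's `str.format` (the template text mirrors beh.yml above; the fill logic is an illustration, not the library's actual code):

```python
# Illustrative substitution for the preprocessing `mapping` template.
# The template text mirrors beh.yml; the concrete values are made up.
template = "[INPUT]{input}[/INPUT]\n{instructions}\n[OUTPUT]{atom}[/OUTPUT]"

prompt = template.format(
    input="There is a directed edge from node 1 to node 3.",
    instructions="List all the edges from 'node1' to 'node2'.",
    atom="edge(node1,node2)",
)
print(prompt)
```

Each mapping entry in app.yml supplies the `{instructions}` and `{atom}` values, while `{input}` comes from the user's natural-language text.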

Example

Here is a simple example that demonstrates how to use it:

from llmasp import llm, asp

# Specify a model name
model_name = "llama3.1:8b"

# Specify ollama server
server = "http://localhost:11434/v1"

# If you are running the model locally with Ollama and your server does not require one, no API key is needed.
# If using OpenAI instead, an API key is required:
# server = "https://api.openai.com/v1"
# api_key = "YOUR_API_KEY"

# Create an LLM handler instance
llm_handler = llm.LLMHandler(model_name, server)

# Create a solver instance
solver = asp.Solver()

# Initialize the LLMASP instance with configuration files and handler
llmasp_instance = llm.LLMASP("app.yml", "beh.yml", llm_handler, solver)

# Convert natural language to ASP query
user_input = "There are directed edges from node 1 to node 3, from node 3 to node 4, from node 3 to node 5, from node 4 to node 2, and from node 2 to node 5. Is node 2 reachable from node 1?"
# Indicates whether a single query or multiple queries should be made to the LLM for fact extraction.
# Its default value is False.
single_pass = True
# natural_to_asp also accepts a max_tokens parameter that controls the maximum length
# of the LLM's response: an integer limit on completion tokens, or None (the default)
# for no limit.
max_tokens = 200  # limit the response length to 200 tokens
created_facts, asp_input, queries, meta = llmasp_instance.natural_to_asp(user_input, single_pass=single_pass, max_tokens=max_tokens)

# Give the input to the solver
result, interrupted, satisfiable = solver.solve(asp_input)

# Convert ASP results to natural language
# To give the model more context, pass the conversation history
# (e.g. history=queries) and set use_history=True.
natural_response = llmasp_instance.asp_to_natural(result, history=[], use_history=False)

print(natural_response)
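The solver's result is expressed as ground ASP facts such as `reaches(1,2)`. If you want to inspect the raw atoms rather than only the natural-language answer, a small illustrative parser can split a fact into predicate and terms (the fact format is assumed from the examples above; the library may expose a structured result directly):

```python
import re

def parse_fact(fact: str):
    """Split an ASP fact like 'reaches(1,2)' into (predicate, [terms])."""
    m = re.fullmatch(r"(\w+)\(([^)]*)\)\.?", fact.strip())
    if m is None:
        raise ValueError(f"not a ground fact: {fact!r}")
    predicate, args = m.group(1), m.group(2)
    terms = [t.strip() for t in args.split(",")] if args else []
    return predicate, terms

print(parse_fact("reaches(1,2)."))  # → ('reaches', ['1', '2'])
```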

Download files

Download the file for your platform.

Source Distribution

llmasp-0.1.8.tar.gz (10.6 kB view details)

Uploaded Source

Built Distribution


llmasp-0.1.8-py3-none-any.whl (11.8 kB view details)

Uploaded Python 3

File details

Details for the file llmasp-0.1.8.tar.gz.

File metadata

  • Download URL: llmasp-0.1.8.tar.gz
  • Size: 10.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.4 Windows/10

File hashes

Hashes for llmasp-0.1.8.tar.gz
Algorithm Hash digest
SHA256 5da86a304eff8d9d5ee82f0ea50b8a357092911f61e43dd87726aee2b9f0dc8d
MD5 cd7fe24a83bcf56ec1aa7e466ec25898
BLAKE2b-256 af04cf4372efefe4da84f5350e95a8eee0531c9cc5bc5947f90ad47cd786464b

See more details on using hashes here.

File details

Details for the file llmasp-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: llmasp-0.1.8-py3-none-any.whl
  • Size: 11.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.4 Windows/10

File hashes

Hashes for llmasp-0.1.8-py3-none-any.whl
Algorithm Hash digest
SHA256 70ada89b73ee60872284bd226498609e53805fe1e2957fce8defb3eed582d21c
MD5 7ce7704c713272cd26bfaaae4934810a
BLAKE2b-256 a4b8be87ec58355320dc8098ffd31a8d44d31e452ea6656e0fc9ff159b217b0e

