Combining LLMs and ASP for intelligent problem-solving and reasoning.

Project description

Installation

To install the package from PyPI, run the following command:

pip install llmasp
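
To confirm the installation, you can try importing the package's modules from the command line (the module names here are taken from the usage example further below):

python -c "from llmasp import llm, asp; print('llmasp imported successfully')"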

How to use

Application Specification File

The application file (app.yml in the example) is a configuration file that defines the problem-solving logic for your application using Answer Set Programming (ASP). It includes sections for preprocessing, knowledge base (the main logic), and postprocessing.

Structure of app.yml

The app.yml file is structured into three main sections:

  1. Preprocessing
  2. Knowledge Base
  3. Postprocessing

Each section plays its own role: preparing the input, encoding the problem-solving logic, and formatting the output.

In the preprocessing section, you can define context information that should be applied before solving the problem and mappings from natural text to facts. The knowledge base section is where you define the main logic of your problem using ASP rules. These rules are used to encode the constraints, relationships, and logic that will drive the decision-making process. The postprocessing section defines the actions to take after the solution is generated by the ASP solver. This section allows you to format and refine the results, as well as provide specific responses.

Example from the file:

preprocessing:
- _: You are helping a user with their datalog questions.
- edge(node1,node2): List all the edges from 'node1' to 'node2'.
- reaches(node1,node2): Asks whether 'node2' is reachable from 'node1'.

knowledge_base: |
  reaches(X,Y) :- edge(X,Y).
  reaches(X,Y) :- edge(X,Z), reaches(Z,Y).

postprocessing:
- _: You are helping a user with their datalog questions.
- reaches(node1,node2): Say that 'node2' is reachable from 'node1'. 
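
Because the application specification is plain YAML, you can inspect its three sections programmatically. The sketch below is illustrative only and uses PyYAML rather than any llmasp API:

import yaml

# Load the application specification and list its top-level sections.
with open("app.yml") as f:
    app_spec = yaml.safe_load(f)

print(list(app_spec.keys()))        # ['preprocessing', 'knowledge_base', 'postprocessing']
print(app_spec["knowledge_base"])   # the ASP rules as a single string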

Behavior Specification File

The behavior specification file (beh.yml in the example) defines a global behavior and how the application should process and transform data during both the preprocessing and postprocessing stages. It guides the system in converting natural language input to Answer Set Programming (ASP) code and vice versa, ensuring the logic and output are aligned with the intended behavior.

The file is divided into the following sections:

  1. Preprocessing
  2. Postprocessing

Each section serves a different purpose in processing the data from natural language input to ASP code, and from ASP code back to natural language. The preprocessing section is responsible for converting natural language input into ASP code. It includes instructions that guide the translation process, ensuring that natural language descriptions are accurately and logically reflected in the corresponding ASP predicates. The postprocessing section is responsible for converting ASP facts into natural language output. This section defines how the results, represented as ASP facts, should be translated back into human-readable sentences.

preprocessing:
  init: | 
    As an ASP translator, your primary task is to convert natural language descriptions, 
    provided in the format [INPUT]input[/INPUT], into precise ASP code, outputting 
    in the format [OUTPUT]predicate(terms).[/OUTPUT]. Focus on identifying key entities 
    and relationships to create facts (e.g., [INPUT]Alice is happy[/INPUT] becomes [OUTPUT]happy(alice).[/OUTPUT]), 
    [INPUT]Bob owns a car[/INPUT] becomes [OUTPUT]owns(bob, car)[/OUTPUT],
    [INPUT]The sky is blue[/INPUT] becomes [OUTPUT]color(sky, blue)[/OUTPUT], 
    and [INPUT]Cats are mammals[/INPUT] becomes [OUTPUT]mammal(cat)[/OUTPUT]. 
    Ensure that the natural language intent is accurately and logically reflected in the ASP code. 
    Maintain semantic accuracy by ensuring logical consistency and correctly reflecting 
    the natural language intent in your ASP code.
  context: |
    Here is some context that you MUST analyze and always remember.
    {context}
    Remember this context and don't say anything!
  mapping: |
    [INPUT]{input}[/INPUT]
    {instructions}
    [OUTPUT]{atom}[/OUTPUT]

postprocessing:
  init: |
    As an ASP to natural language translator, you will convert ASP facts provided in the format 
    [FACTS]atoms[/FACTS] into clear natural language statements using predefined mapping instructions. 
    For example, [FACTS]happy(alice)[/FACTS] should be translated to "Alice is happy," 
    [FACTS]friend(alice, bob)[/FACTS] to "Alice is friends with Bob," and [FACTS]owns(bob, car)[/FACTS] 
    to "Bob owns a car." Ensure each fact is accurately and clearly represented in natural language, 
    maintaining the integrity of the original information.
  context: |
    Here is some context that you MUST analyze and remember.
    {context}
    Remember this context and don't say anything!
  mapping: |
    [FACTS]{facts}[/FACTS]
    Each fact matching {atom} must be interpreted as follows: {instructions}
  summarize: |
    "Summarize the following responses: {responses}"

Example

Here is a simple example that demonstrates how to use it:

from llmasp import llm, asp

# Specify a model name
model_name = "llama3.1:8b"

# Specify the Ollama server URL
server = "http://localhost:11434/v1"

# Create an LLM handler instance
llm_handler = llm.LLMHandler(model_name, server)

# Create a solver instance
solver = asp.Solver()

# Initialize the LLMASP instance with configuration files and handler
llmasp_instance = llm.LLMASP("app.yml", "beh.yml", llm_handler, solver)

# Convert natural language to ASP query
user_input = "There are directed edges from node 1 to node 3, from node 3 to node 4, from node 3 to node 5, from node 4 to node 2, and from node 2 to node 5. Is node 2 reachable from node 1?"
# Indicates whether a single query or multiple queries should be made to the LLM for fact extraction.
# Its default value is False.
single_pass = True
# The natural_to_asp function can also receive a max_tokens parameter to control the maximum length of the LLM's response.
# This value can either be an integer, specifying the maximum number of tokens for the completion, or None if there is no limit.
# By default, max_tokens is set to None, meaning there is no restriction on the number of tokens in the response.
max_tokens = 200  # Limit the response length to 200 tokens
created_facts, asp_input, queries, meta = llmasp_instance.natural_to_asp(user_input, single_pass=single_pass, max_tokens=max_tokens)

# Give the input to the solver
result, interrupted, satisfiable = solver.solve(asp_input)

# Convert ASP results to natural language
# If you want to give more context to the model, you can pass the history
# history = queries
natural_response = llmasp_instance.asp_to_natural(result, history=[], use_history=False)

print(natural_response)
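
Since solver.solve also returns a satisfiability flag, you may want to branch on it before translating the result back to natural language. This is only an illustrative addition; the exact semantics of the returned flags are assumed from the variable names in the example above:

# Translate the answer set only if the program is satisfiable (assumed flag semantics).
if satisfiable:
    print(llmasp_instance.asp_to_natural(result, history=[], use_history=False))
else:
    print("No answer set was found for the given facts.")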

