Prolog-like interpreter and tuple store

Project description

A lightweight Prolog-like interpreter with a natural-language style syntax and deeply indexed tuple database interface

We closely follow Einstein's "Everything should be made as simple as possible, but no simpler."

At this point, we rely on Python's natural error checking, without doing much to warn about syntactic or semantic errors. That can be added later; for now, this is meant as an executable specification of an otherwise simple and natural logic language that we hereby name Natlog.

Natlog : a succinct overview

  • Terms are represented as nested tuples.

  • A parser and scanner for a simplified Prolog term syntax are used to turn terms into nested Python tuples.

The surface syntax of facts, as read from strings, is just whitespace-separated words (with tuples parenthesized) and sentences ended with . or ?. As in Prolog, variables are capitalized, unless quoted. Example programs are in the folder natprogs, for instance tc.nat:

cat is feline.
tiger is feline.
mouse is rodent.
feline is mammal.
rodent is mammal.
snake is reptile.
mammal is animal.
reptile is animal.

tc A Rel B : A Rel B.
tc A Rel C : A Rel B, tc B Rel C.
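
As a rough illustration (based on the clause format shown in the RULES listing further down this page, not on the library's internals), the parser turns each clause into a (head, body) pair of nested Python tuples, with variables replaced by internal placeholders:

# "cat is feline."  -- a fact, i.e. a clause with an empty body
fact = (('cat', 'is', 'feline'), ())
# "tc A Rel B : A Rel B."  would become roughly
# (('tc', _0, _1, _2), ((_0, _1, _2),))  with A, Rel, B renumbered to _0, _1, _2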

Install it with:

pip3 install -U natlog

Then, to query it, try:

>>> from natlog import Natlog, natprogs
>>> n=Natlog(file_name=natprogs()+"tc.nat")
>>> n.query("tc Who is animal ?")

It will return answers based on the transitive closure of the is relation:

QUERY: tc Who is animal ?
ANSWER: {'Who': 'cat'}
ANSWER: {'Who': 'tiger'}
ANSWER: {'Who': 'mouse'}
ANSWER: {'Who': 'feline'}
ANSWER: {'Who': 'rodent'}
ANSWER: {'Who': 'snake'}
ANSWER: {'Who': 'mammal'}
ANSWER: {'Who': 'reptile'}
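
For comparison, here is a plain-Python sketch (an illustration, not part of the library) of the same transitive closure over the is facts above:

facts = {('cat', 'feline'), ('tiger', 'feline'), ('mouse', 'rodent'),
         ('feline', 'mammal'), ('rodent', 'mammal'), ('snake', 'reptile'),
         ('mammal', 'animal'), ('reptile', 'animal')}

def tc(x, z):
    # x "is" z directly, or x "is" some y that transitively "is" z
    return (x, z) in facts or any(tc(y, z) for (w, y) in facts if w == x)

print(sorted(x for (x, _) in facts if tc(x, 'animal')))   # the eight answers above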

List processing is also supported as in:

app () Ys Ys. 
app (X Xs) Ys (X Zs) : app Xs Ys Zs.
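
Read with lists encoded as nested pairs such as (1 (2 (3 ()))), app is the usual append relation. A plain-Python sketch of the same recursion (an illustration, assuming that encoding):

def app(xs, ys):
    if xs == ():               # app () Ys Ys.
        return ys
    x, rest = xs               # app (X Xs) Ys (X Zs) : app Xs Ys Zs.
    return (x, app(rest, ys))

print(app((1, (2, ())), (3, ())))   # (1, (2, (3, ())))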

The interpreter supports a yield mechanism, similar to Python's own: a goal like ^ my_answer X results in my_answer X being yielded as an answer.

The interpreter has also been extended to handle simple function and generator calls to Python using the same prefix operator syntax:

  • `f A B .. Z R, resulting in Python function f(A,B,..,Z) being called and R unified with its result
  • ``f A B .. Z R, resulting in Python generator f(A,B,..,Z) being called and R unified with its yields, one at a time
  • ~ R A B .. Z, for unifying R A B .. Z with matching facts in the term store
  • # f A B .. Z, resulting in f(A,B,..,Z) being called with no result returned
  • $ V X, resulting in the value of the variable named V being unified with X
  • eng X G E, resulting in a first-class Natlog engine with answer pattern X and goal G being bound to E
  • ask E A, resulting in the next answer of engine E being unified with A

Take a look at natprogs/lib.nat for examples of built-ins obtained by extending this interface, mostly at source level.
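
For instance, a hypothetical sketch of the Python side of the first two call forms (the names plus and up_to are made up here, and how such functions are made visible to the interpreter is not shown): they are ordinary functions and generators.

def plus(x, y):
    # a goal like `plus 1 2 R would call plus(1, 2) and unify R with 3
    return x + y

def up_to(n):
    # a goal like ``up_to 3 X would enumerate X = 0, 1, 2, one at a time
    yield from range(n)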

A nested tuple store for unification-based tuple mining

An indexer in combination with the unification algorithm is used to retrieve ground terms matching terms containing logic variables.

Indexing is on all constants occurring in ground facts placed in a database.

As facts are ground, unification runs with the occurs check and trailing turned off when searching for a match.
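
A minimal sketch of the indexing idea (an illustration, not the library's actual code): each ground fact is indexed under every constant it contains; a query first intersects the index entries of its constants, and unification then filters the surviving candidates.

from collections import defaultdict

facts = [('John', 'has', ('a', 'car')),
         ('Mary', 'has', ('a', 'bike')),
         ('John', 'is', ('a', 'pilot'))]

def constants(term):
    # enumerate the constants occurring in a nested tuple term
    if isinstance(term, tuple):
        for t in term:
            yield from constants(t)
    else:
        yield term

index = defaultdict(set)
for i, fact in enumerate(facts):
    for c in constants(fact):
        index[c].add(i)

# candidates for "Who has (a What)?": facts mentioning both 'has' and 'a'
print([facts[i] for i in sorted(index['has'] & index['a'])])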

To try it out, do:

python3 -i

>>> from natlog.test.tests import *
>>> dtest()

It gives, after digesting a text and then querying it:

QUERY: Who has (a What)?
--> ('John', 'has', ('a', 'car'))
--> ('Mary', 'has', ('a', 'bike'))

QUERY: Who is (a pilot)?
--> ('John', 'is', ('a', 'pilot'))

QUERY: 'Mary' is What?
--> ('Mary', 'is', ('a', 'student'))

QUERY: 'John' is (a What)?
--> ('John', 'is', ('a', 'pilot'))

QUERY: Who is What?
--> ('Mary', 'is', ('a', 'student'))
--> ('John', 'is', ('a', 'pilot'))

Neuro-symbolic tuple database

As an extension of the nested tuple store, the neuro-symbolic tuple database uses a machine learning algorithm instead of the indexer. Thus it offers the same interface as the tuple store that it extends. The learner is trained upon loading the database file (from a .nat, .csv or .tsv file) and its inference mechanism is triggered when facts from the database are queried. The stream of tuples returned from the query is then filtered via unification (and possibly more general integrity constraints, expressed via logic programming constructs).

Example of usage (see more at https://github.com/ptarau/minlog/blob/main/natlog/test/tests.py):

def ndb_test():
    # load the rules from dbtc.nat and the facts database from db.nat
    nd = neural_natlog(file_name=natprogs() + "dbtc.nat", db_name=natprogs() + "db.nat")
    print('RULES')
    print(nd)
    print('DB FACTS')
    print(nd.db)
    # run a transitive closure query against the neuro-symbolic store
    nd.query("tc Who is_a animal ?")

The output shows the X and y numpy arrays used to fit the sklearn learner, followed by the logic program's rules and the database facts from which the arrays were extracted when the facts were loaded. (A rough sketch of how such arrays can be built is given after the listing.)

X:
 [[1 0 0 0 0 0 0 0 0 0 0 0]
 [0 1 0 0 0 0 0 0 0 0 0 0]
 [0 0 1 0 0 0 0 0 0 0 0 0]
 [0 0 0 1 0 0 0 0 0 0 0 0]
 [0 0 0 0 1 0 0 0 0 0 0 0]
 [0 0 0 0 0 1 0 0 0 0 0 0]
 [0 0 0 0 0 0 1 0 0 0 0 0]
 [0 0 0 0 0 0 0 1 0 0 0 0]
 [0 0 0 0 0 0 0 0 1 0 0 0]
 [0 0 0 0 0 0 0 0 0 1 0 0]
 [0 0 0 0 0 0 0 0 0 0 1 0]
 [0 0 0 0 0 0 0 0 0 0 0 1]]

y:
 [[1 0 1 0 0 0 0 0 0 0]
 [1 1 1 1 1 1 1 1 1 1]
 [1 0 0 0 0 0 0 0 0 0]
 [0 1 0 1 0 0 0 0 0 0]
 [0 1 0 0 0 0 0 0 0 0]
 [0 0 1 1 0 1 0 0 0 0]
 [0 0 0 0 1 0 0 0 0 0]
 [0 0 0 0 1 0 1 0 0 0]
 [0 0 0 0 0 1 1 0 0 1]
 [0 0 0 0 0 0 0 1 1 1]
 [0 0 0 0 0 0 0 1 0 0]
 [0 0 0 0 0 0 0 0 1 0]] 

RULES
(('cat', 'is_a', 'feline'), ())
 ((_0, 'is_a', _1), (('~', _0, 'is', _1),))
 (('tc', _0, _1, _2), ((_0, _1, _3), ('tc1', _3, _1, _2)))
 (('tc1', _0, _1, _0), ())
 (('tc1', _0, _1, _2), (('tc', _0, _1, _2),))

DB FACTS
(0, ('tiger', 'is', 'feline'))
(1, ('mouse', 'is', 'rodent'))
(2, ('feline', 'is', 'mammal'))
(3, ('rodent', 'is', 'mammal'))
(4, ('snake', 'is', 'reptile'))
(5, ('mammal', 'is', 'animal'))
(6, ('reptile', 'is', 'animal'))
(7, ('bee', 'is', 'insect'))
(8, ('ant', 'is', 'insect'))
(9, ('insect', 'is', 'animal'))

QUERY: tc Who is_a animal ?
ANSWER: {'Who': 'cat'}
ANSWER: {'Who': 'tiger'}
ANSWER: {'Who': 'mouse'}
ANSWER: {'Who': 'feline'}
ANSWER: {'Who': 'rodent'}
ANSWER: {'Who': 'snake'}
ANSWER: {'Who': 'mammal'}
ANSWER: {'Who': 'reptile'}
ANSWER: {'Who': 'bee'}
ANSWER: {'Who': 'ant'}
ANSWER: {'Who': 'insect'}
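
A rough sketch of how arrays like X and y above can be derived and used (an assumption about the encoding, not the library's actual code; the choice of learner here is arbitrary): each constant gets a one-hot row in X, y marks which facts it occurs in, and a multi-label sklearn learner fit on (X, y) proposes candidate facts that unification then filters.

import numpy as np
from sklearn.neural_network import MLPClassifier

facts = [('tiger', 'is', 'feline'), ('mouse', 'is', 'rodent'),
         ('feline', 'is', 'mammal')]                     # toy subset of DB FACTS
consts = sorted({c for f in facts for c in f})
X = np.eye(len(consts))                                  # one-hot row per constant
y = np.array([[int(c in f) for f in facts] for c in consts])
learner = MLPClassifier(max_iter=3000).fit(X, y)
# fact indices proposed by the learner for the constant 'feline'
row = learner.predict(X[[consts.index('feline')]])[0]
print([facts[i] for i in np.flatnonzero(row)])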


