
ilo li moku e toki li pana e sona ni: ni li toki ala toki pona? ("The tool consumes a message and answers: is it Toki Pona or not?")


sona toki


What is sona toki?

This library, "Language Knowledge," helps you identify whether a message is in Toki Pona. It does so by determining whether a large enough number of words in a statement are "in Toki Pona". No grammar checking, yet.

I wrote this library with a variety of scraps and lessons learned from a prior project, ilo pi toki pona taso, "toki-pona-only tool". That tool now uses this library to great success!

If you've ever worked on a similar project, you know the question "is this message in [language]" is not a consistent one: the environment, the topic, the preferences of the speaker, and much more can all alter whether a given message is "in" any specific language. This complexity applies to Toki Pona too.

So, this project "solves" that complex problem by offering an opinionated tokenizer and a configurable parser, allowing you to tune its output to your preferences and goals. Even silly ones.

Quick Start

Install with your preferred Python package manager. Example:

pdm init  # if your pyproject.toml doesn't exist yet
pdm add sonatoki

Then get started with a script along these lines:

from sonatoki.ilo import Ilo
from sonatoki.Configs import PrefConfig

def main():
    ilo = Ilo(**PrefConfig)
    ilo.is_toki_pona("imagine how is touch the sky")  # False
    ilo.is_toki_pona("o pilin insa e ni: sina pilin e sewi")  # True
    ilo.is_toki_pona("I Think I Can Evade Detection")  # False

if __name__ == "__main__":
    main()

Or if you'd prefer to configure on your own:

from copy import deepcopy
from sonatoki.ilo import Ilo
from sonatoki.Configs import BaseConfig
from sonatoki.Filters import NimiLinkuCore, NimiLinkuCommon, Phonotactic, ProperName, Or
from sonatoki.Scorers import SoftPassFail

def main():
    config = deepcopy(BaseConfig)
    config["scoring_filters"].extend([Or(NimiLinkuCore, NimiLinkuCommon), Phonotactic, ProperName])
    config["scorer"] = SoftPassFail

    ilo = Ilo(**config)
    ilo.is_toki_pona("mu mu!")  # True
    ilo.is_toki_pona("mi namako e moku mi")  # True
    ilo.is_toki_pona("ma wulin")  # False

if __name__ == "__main__":
    main()

Ilo is highly configurable by necessity, so I recommend looking through the premade configs in Configs as well as the individual Preprocessors, Filters, and Scorers. In Cleaners, all you need is ConsecutiveDuplicates. In Tokenizers, the preferred tokenizers WordTokenizer and SentTokenizer are already the default in Ilo.

Development

  1. Install pdm
  2. pdm install --dev
  3. Open any file you like!

FAQ

Why isn't this README/library written in Toki Pona?

The intent is to show our methodology to the Unicode Consortium, particularly to the Script Encoding Working Group (previously the Script Ad Hoc Group). As far as we're aware, zero members of the committee know Toki Pona, which unfortunately means we fall back on English.

I originally intended to translate this file and library into Toki Pona once Unicode had reviewed our proposal, but the library has picked up interest outside the Toki Pona community, so the README will remain in English to stay accessible to those users.

What's the deal with the tokenizers?

The Toki Pona tokenizer, sonatoki.Tokenizers.WordTokenizer, attempts to tokenize statements such that every token either represents a word candidate ("toki", "mumumu") or a complete non-candidate ("..!", "123"). This aggressive design would be undesirable for an English tokenizer such as NLTK's, because words in languages other than Toki Pona can have punctuation characters in or around them that are part of the word. Toki Pona has no mid-word symbols when rendered in the Latin alphabet or in Private Use Area Unicode characters, so a more aggressive tokenizer is highly desirable. However, this tokenizer doesn't ignore intra-word punctuation entirely: exactly one - or ' is allowed at a time, so long as both of its neighbors are writing characters. This significantly increases the tokenizer's accuracy, which in turn makes identifying Toki Pona sentences among arbitrary ones more accurate.
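As a rough, stdlib-only illustration of that splitting rule (a sketch, not sonatoki's actual implementation), a toy tokenizer could look like this:

```python
import re

# Toy illustration of the rule described above: a word candidate is a run
# of Latin letters in which a single "-" or "'" may appear, provided both
# of its neighbors are letters; everything else becomes complete
# non-candidate tokens.
WORD = re.compile(r"[A-Za-z]+(?:['-][A-Za-z]+)*")

def toy_tokenize(text: str) -> list[str]:
    tokens = []
    pos = 0
    for m in WORD.finditer(text):
        tokens.extend(text[pos:m.start()].split())  # non-candidate material
        tokens.append(m.group())
        pos = m.end()
    tokens.extend(text[pos:].split())
    return tokens

print(toy_tokenize("toki, mumumu ..! 123"))
# ['toki', ',', 'mumumu', '..!', '123']
```

Note how "toki" and "mumumu" come out as clean word candidates while the punctuation and digits are kept as separate non-candidates, and how "o'clock" would survive as a single token because the apostrophe has letters on both sides.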

The goal of splitting into word candidates and non-candidates is important because any encoding of Toki Pona's logographic script will require that each character be split into its own token, whereas the default behavior would be to leave consecutive non-punctuation characters together.

Aren't there a lot of false positives?

For any individual filter, yes. Here are some examples:

  • ProperName will errantly match text in languages without a capital/lowercase distinction
  • Alphabetic matches words so long as they are only made of letters in Toki Pona's alphabet, which is 14 letters of the Latin alphabet.
  • Syllabic and Phonotactic, despite imposing more structure than Alphabetic, will match a surprising number of English words. For example, every word in "an awesome joke!" matches.
  • NimiPu and NimiLinkuCore will match a, mute, open regardless of the surrounding language.
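For instance, the Alphabetic behavior described above can be mimicked in a few lines of plain Python (a toy re-creation, not the library's filter):

```python
# Toy version of an Alphabetic-style filter: a word passes if it uses
# only Toki Pona's 14 Latin letters. Not sonatoki's actual code.
ALPHABET = set("aeijklmnopstuw")

def alphabetic_like(word: str) -> bool:
    return bool(word) and set(word.lower()) <= ALPHABET

print([w for w in "an awesome joke".split() if alphabetic_like(w)])
# ['an', 'awesome', 'joke'] -- every word is a false positive
```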

This is the point of Ilo and the Scorers: none of these filters could individually identify a Toki Pona statement correctly, but all of them working together, with some tuning, achieve surprisingly high accuracy.

Don't some of the cleaners/filters conflict?

Yes, though not terribly much.

  • ConsecutiveDuplicates may errantly change a word's validity. For example, "manna" is phonotactically invalid in Toki Pona, but would become "mana" which is valid.
  • ConsecutiveDuplicates will not work correctly with syllabaries, though this should not change the validity of the analyzed word unless you attempt to dictionary match these words.
  • If you build your own MemberFilter with words that have capital letters or consecutive duplicates, they will never match unless you use prep_dictionary.
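The first interaction above is easy to see with a toy version of a consecutive-duplicate cleaner (a sketch, not sonatoki's ConsecutiveDuplicates):

```python
import re

def collapse_duplicates(word: str) -> str:
    # Collapse any run of a repeated character down to one occurrence,
    # so phonotactically invalid "manna" becomes valid "mana".
    return re.sub(r"(.)\1+", r"\1", word)

print(collapse_duplicates("manna"))   # mana
print(collapse_duplicates("mumumu"))  # mumumu (no consecutive duplicates)
```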

You'll notice these are mostly caused by applying Latin-alphabet filters to non-Latin text. Working on it!
