==================================================
Wissen: Full-Text Search and Classification Engine
==================================================

.. contents:: Table of Contents


Introduction
============

Wissen Search & Classification Engine is a simple search engine, written mostly in Python and C, that I started in 2008.

At that time, I wanted to study an early version of Lucene_ together with Lupy_ and CLucene_, and I had also built my own search engine as an exercise.

Its file format, numeric compression algorithm and indexing process are quite similar to Lucene's, but the querying and result-fetching parts were built from my own imagination. As a result they are entirely unorthodox and possibly inefficient. Wissen's searching mechanism resembles the DNA-RNA-Protein working model, translated into 'Index File - Temporary Small Replication Buffer - Query Result':

* Every searcher (cell) has a single group of index file handlers (the DNA group in the nucleus)
* Each thread has multiple small buffers (RNA) for replicating the parts of the index it needs
* The query class (ribosome) creates the query result (protein) by synthesizing the buffers' information (RNAs)
* Repeat from the 2nd step if more results are expected

.. _Lucene: https://lucene.apache.org/core/
.. _Lupy: https://pypi.python.org/pypi/Lupy
.. _CLucene: http://clucene.sourceforge.net/


Installation
=============

Wissen contains a C extension, so a C compiler is required.

.. code:: bash

pip install wissen

On POSIX systems, a few extra packages may be required:

.. code:: bash

apt-get install gcc zlib1g-dev


Quick Start
============

All field text should be str or utf-8 encoded bytes in Python 3.x, and unicode or utf-8 encoded str in Python 2.7. Otherwise the encoding should be specified explicitly.

Indexing and Searching
-------------------------

Here's an example that indexes just one document.

.. code:: python

import wissen

# indexing
analyzer = wissen.standard_analyzer (max_term = 3000)
col = wissen.collection ("./col", wissen.CREATE, analyzer)
indexer = col.get_indexer ()

song = "violin sonata in c k.301"
composer = u"wolfgang amadeus mozart"
birth = 1756
home = "50.665629/8.048906" # Lattitude / Longitude of Salzurg
genre = "01011111" # (rock serenade jazz piano symphony opera quartet sonata)

document = wissen.document ()

# object to return, any object serializable by pickle
document.content ([song, composer])

# text content used to generate an auto snippet for the given query terms
document.snippet (song)

# add searchable fields
document.field ("default", song, wissen.TEXT)
document.field ("composer", composer, wissen.TEXT)
document.field ("birth", birth, wissen.INT16)
document.field ("genre", genre, wissen.BIT8)
document.field ("home", home, wissen.COORD)

indexer.add_document (document)
indexer.close ()

# searching
analyzer = wissen.standard_analyzer (max_term = 8)
col = wissen.collection ("./col", wissen.READ, analyzer)
searcher = col.get_searcher ()
print (searcher.query (u'violin', offset = 0, fetch = 2, sort = "tfidf", summary = 30))
searcher.close ()


The result will look like this:

.. code:: python

{
'code': 200,
'time': 0,
'total': 1,
'result': [
[
['violin sonata in c k.301', 'wolfgang amadeus mozart'], # content
'<b>violin</b> sonata in c k.301', # auto snippet
14, 0, 0, 0 # additional info
]
],
'sorted': [None, 0],
'regex': 'violin|violins',
}


Learning and Classification
---------------------------

Here's an example that guesses one of 'Play Golf' or 'Go To Bed' from weather conditions.

.. code:: python

import wissen

analyzer = wissen.standard_analyzer (max_term = 3000)

# learning

mdl = wissen.model ("./mdl", wissen.CREATE, analyzer)
learner = mdl.get_learner ()

document = wissen.labeled_document ("Play Golf", "cloudy windy warm")
learner.add_document (document)
document = wissen.labeled_document ("Play Golf", "windy sunny warm")
learner.add_document (document)
document = wissen.labeled_document ("Go To Bed", "cold rainy")
learner.add_document (document)
document = wissen.labeled_document ("Go To Bed", "windy rainy warm")
learner.add_document (document)
learner.close ()

mdl = wissen.model ("./mdl", wissen.MODIFY, analyzer)
learner = mdl.get_learner ()
learner.listbydf () # show all terms with DF (Document Frequency)
learner.close ()

mdl = wissen.model ("./mdl", wissen.MODIFY, analyzer)
learner = mdl.get_learner ()
learner.build (dfmin = 2) # build corpus DF >= 2
learner.close ()

mdl = wissen.model ("./mdl", wissen.MODIFY, analyzer)
learner = mdl.get_learner ()
learner.train (
cl_for = wissen.ALL, # for which classifier
selector = wissen.CHI2, # feature selecting method
select = 0.99, # how many features?
orderby = wissen.MAX, # feature ranking by what?
dfmin = 2 # exclude DF < 2
)
learner.close ()


# guessing

mdl = wissen.model ("./mdl", wissen.READ, analyzer)
classifier = mdl.get_classifier ()
print (classifier.guess ("rainy cold", cl = wissen.NAIVEBAYES))
print (classifier.guess ("rainy cold", cl = wissen.FEATUREVOTE))
print (classifier.guess ("rainy cold", cl = wissen.TFIDF))
print (classifier.guess ("rainy cold", cl = wissen.SIMILARITY))
print (classifier.guess ("rainy cold", cl = wissen.ROCCHIO))
print (classifier.guess ("rainy cold", cl = wissen.MULTIPATH))
print (classifier.guess ("rainy cold", cl = wissen.META))
classifier.close ()


The result will look like this:

.. code:: python

{
'code': 200,
'total': 1,
'time': 5,
'result': [('Go To Bed', 1.0)],
'classifier': 'meta'
}


Limitations
==============

Before you try Wissen, you should know about some limitations.

- Wissen cannot sort search results by string-type fields, but it can sort by int/bit/coord fields and by TF-IDF ranking.

- Wissen's classification aims at real-time guessing performance (within one second) rather than accuracy, so it uses relatively simple and fast classification algorithms. If you need high accuracy, it may not be a good fit for you.


Configure Wissen
==================

No configuration is necessary for indexing/learning, but it is required for searching/guessing, because Wissen allocates memory per thread for searching and classifying at initialization.

.. code:: python

wissen.configure (
numthread,
logger,
io_buf_size = 4096,
mem_limit = 256
)


- numthread: the number of threads that may access Wissen collections and models. If set to 8, you can open multiple collections (or models) and access them with 8 threads; if a 9th thread tries to access Wissen, an error will be raised

- logger: *see next chapter*

- io_buf_size = 4096: size in bytes of the buffer used for replicating index files

- mem_limit = 256: memory limit per thread. It is not absolute: it can be exceeded during a calculation if needed, but memory is returned as soon as the calculation finishes
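
For instance, a minimal setup could look like the sketch below. The thread count is arbitrary, and it assumes logger.screen_logger () (covered in the next chapter) returns a logger object that configure () accepts.

.. code:: python

import wissen
from wissen.lib import logger

# let up to 4 threads access collections/models; keep default buffer/memory sizes
wissen.configure (4, logger.screen_logger ())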


Finally, when your app terminates, call shutdown:

.. code:: python

wissen.shutdown ()


Logger
========

.. code:: python

from wissen.lib import logger

logger.screen_logger ()

# creates the file '/var/log/wissen.log', rotated daily
logger.rotate_logger ("/var/log", "wissen", "daily")


Standard Analyzer
====================

An analyzer is needed by the TEXT and TERM field types.

Basic usage is:

.. code:: python

analyzer = wissen.standard_analyzer (
max_term = 8,
numthread = 1,
ngram = True or False,
stem_level = 0, 1 or 2 (2 is only applied to English Language),
make_lower_case = True or False,
stopwords_case_sensitive = True or False,
ngram_no_space = True or False,
strip_html = True or False,
contains_alpha_only = True or False,
stopwords = [word,...]
)

- stem_level: 0 or 1; the 'en' language additionally has level 2 for hard stemming

- make_lower_case: lower-case all text

- stopwords_case_sensitive: only effective when make_lower_case is False

- ngram_no_space: if False, '泣斬 馬謖' will be tokenized to _泣, 泣斬, 斬\_, _馬, 馬謖, 謖\_. But if True, an additional bi-gram 斬馬 will be created between 斬\_ and _馬.

- strip_html

- contains_alpha_only: remove terms that don't contain alphabetic characters; this option is useful for full-text training in some cases

- stopwords: Wissen ships only an English stopword list, but you can supply custom stopwords, as in the sketch below. Stopwords should be unicode or utf-8 encoded bytes
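
For example, a sketch of an analyzer with a custom stopword list; the particular option values are only illustrative.

.. code:: python

analyzer = wissen.standard_analyzer (
max_term = 3000,
make_lower_case = True,
stopwords = ["the", "a", "of", "and"]
)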

Wissen includes several stemmers and n-gram methods for international languages, and they can be used this way:

.. code:: python

analyzer = wissen.standard_analyzer (ngram = True, stem_level = 1)
col = wissen.collection ("./col", wissen.CREATE, analyzer)
indexer = col.get_indexer ()
document.field ("default", song, wissen.TEXT, lang = "en")


Implemented Stemmers
---------------------

All stemmers except the English one were obtained from `IR Multilingual Resources at UniNE`__.

- ar: Arabic
- de: German
- en: English
- es: Spanish
- fi: Finnish
- fr: French
- hu: Hungarian
- it: Italian
- pt: Portuguese
- sv: Swedish

.. __: http://members.unine.ch/jacques.savoy/clef/index.html


Bi-Gram Index
----------------

If ngram is set to True, these languages will be indexed with bi-gram.

- cn: Chinese
- ja: Japanese
- ko: Korean

Also note that if a word contains only alphabetic characters, the English stemmer will be used.


Tri-Gram Index
---------------

For the other languages, the English stemmer is used if the word is entirely alphabetic. If ngram is set to True, words containing multibyte characters are indexed with tri-grams.

**Methods Spec**

- analyzer.index (document, lang)
- analyzer.freq (document, lang)
- analyzer.stem (document, lang)
- analyzer.count_stopwords (document, lang)
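
These methods can also be called directly for experimenting. The sketch below assumes they accept a plain text string plus a language code, as the spec above suggests; the exact return values are not documented here, so treat them as assumptions.

.. code:: python

analyzer = wissen.standard_analyzer (max_term = 8)
print (analyzer.stem ("violins and sonatas", "en"))   # stemmed terms (assumed)
print (analyzer.freq ("violins and sonatas", "en"))   # term frequencies (assumed)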


Collection
==================

Collection manages index files, segments and properties.

.. code:: python

col = wissen.collection (
indexdir = [dirs],
mode = [ CREATE | READ | APPEND ],
analyzer = None,
logger = None
)

- indexdir: a path or a list of paths (for using multiple disks efficiently)
- mode
- analyzer
- logger: not necessary if a logger was already configured via wissen.configure

A collection has 2 major classes: indexer and searcher.



Indexer
---------

To search documents, the text must first be indexed to build an inverted index for fast term queries.

.. code:: python

indexer = col.get_indexer (
max_segments = int,
force_merge = True or False,
max_memory = 10000000 (10Mb),
optimize = True or False
)

- max_segments: the maximum number of index segments; when exceeded, segments are merged. Note that during indexing up to 3 times max_segments segments may be created, and when indexer.close () is called Wissen automatically tries to merge until the number of segments is back within bounds

- force_merge: when indexer.close () is called, forcefully try to merge down to a single segment. This fails if the index is too big - over 2 GB on a 32-bit OS, over 10 GB on 64-bit

- max_memory: when this is exceeded, a new segment is created during indexing

- optimize: when indexer.close () is called, segments are merged down to as close to the optimal number as possible. A concrete sketch follows below
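
A concrete call might look like the following sketch; the option values are only illustrative.

.. code:: python

indexer = col.get_indexer (
max_segments = 10,
force_merge = False,
max_memory = 10000000, # start a new segment roughly every 10 MB
optimize = True
)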


To add a document to the indexer, create a document object:

.. code:: python

document = wissen.document ()

Wissen handles the following three parts of a document as completely separate objects with no relationship between them:

- returning content
- snippet generating field
- searchable fields


**Returning Content**

Wissen serializes the returning content with pickle, so you can set any pickle-serializable object.

.. code:: python

document.content ({"userid": "hansroh", "preference": {"notification": "email", ...}})

or

document.content ([32768, "This is sample ..."])


**Snippet Generating Field**

This field should be unicode or utf-8 encoded bytes.

.. code:: python

document.snippet ("This is sample...")


**Searchable Fields**

A document also receives searchable fields:

.. code:: python

document.field (name, value, ftype = wissen.TEXT, lang = "un", encoding = None)

document.field ("default", "violin sonata in c k.301", wissen.TEXT, "en")
document.field ("composer", "wolfgang amadeus mozart", wissen.TEXT, "en")
document.field ("lastname", "mozart", wissen.STRING)
document.field ("birth", 1756, wissen.INT16)
document.field ("genre", "01011111", wissen.BIT8)
document.field ("home", "50.665629/8.048906", wissen.COORD6)


- name: if 'default', this field is searched with a bare query string; otherwise use 'name:query_text'
- value: unicode/utf-8 encoded text, or give the encoding argument
- ftype: *see below*
- encoding: e.g. 'iso8859-1' if the value is not unicode/utf-8
- lang: language code for the standard_analyzer, "un" (unknown) is the default

Available field types are:

- TEXT: analyzable full-text, result-not-sortable

- TERM: analyzable full-text, but position data is not indexed so phrases cannot be searched, result-not-sortable

- STRING: exact string match (e.g. nation codes), result-not-sortable

- LIST: comma separated STRINGs, result-not-sortable

- COORDn, n=4,6,8 decimal precision: comma separated string 'latitude,longitude'; latitude and longitude should be floats in the ranges -90 ~ 90 and -180 ~ 180. n is the precision of the coordinates: n=4 gives roughly 10 m radius precision, 6 gives 1 m and 8 gives 10 cm. result-sortable

- BITn, n=8,16,24,32,40,48,56,64: bitwise operations, a bit-marked string of length n is required, result-sortable

- INTn, n=8,16,24,32,40,48,56,64: range queries, an int is required, result-sortable


Repeat add_document as needed and close the indexer.

.. code:: python

for ...:
    document = wissen.document ()
    ...
    indexer.add_document (document)
indexer.close ()

If searchers using this collection are running in another process or thread, they are automatically reloaded within a few seconds to pick up the changed index.


Searcher
---------

To run a searcher, call wissen.configure () first and then create the searcher.

.. code:: python

searcher = col.get_searcher (
max_result = 2000,
num_query_cache = 200
)

- max_result: maximum number of search results returned. Default is 2000; if set to 0, results are unlimited

- num_query_cache: default is 200; when the cache exceeds this, the oldest entries by access time are removed


Querying is simple:

.. code:: python

searcher.query (
qs,
offset = 0,
fetch = 10,
sort = "tfidf",
summary = 30,
lang = "un"
)

- qs: a string (unicode) or utf-8 encoded bytes. For the detailed query syntax, see below
- offset: start position within the result records
- fetch: number of records to return from offset
- sort: "(+-)tfidf" or "(+-)field name"; the field should be an int/bit type. '-' means descending (highest score/value first) and is the default if not specified. If sort is "", records are returned in reverse indexing order
- summary: number of terms for the snippet
- lang: default is "un" (unknown). A usage sketch follows below
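
For example, continuing with the Quick Start collection, the following sketch fetches the second page of 10 results sorted by the INT16 'birth' field in descending order; all names come from the earlier example.

.. code:: python

result = searcher.query (
u"composer:mozart",
offset = 10,
fetch = 10,
sort = "-birth",
summary = 30
)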


To delete indexed documents:

.. code:: python

searcher.delete (qs)

All matching documents are deleted immediately. And if searchers using this collection are running in other processes or threads, those searchers are automatically reloaded within a few seconds.

Finally, close the searcher.

.. code:: python

searcher.close ()


**Query Syntax**

- violin composer:mozart birth:1700~1800

search 'violin' in the default field, 'mozart' in the composer field, and the range 1700 to 1800 in the birth field

- violin allcomposer:wolfgang mozart

search 'violin' in the default field; every term after allcomposer: is searched in the composer field

- violin -sonata birth:~1800

must not contain 'sonata' in the default field

- violin -composer:mozart

must not contain 'mozart' in the composer field

- violin or piano genre:00001101/all

matches when the 5th, 6th and 8th bits are all 1; /any and /none are also available

- violin or ((piano composer:mozart) genre:00001101/any)

'()' grouping and 'or' operators can be nested without limit

- (violin or ((allcomposer:mozart wolfgang) -amadeus)) sonata (genre:00001101/none home:50.6656,8.0489~10000)

search for a home location within 10 km of the coordinate (50.6656, 8.0489)

- "violin sonata" genre:00001101/none home:50.6656/8.0489~10

search the exact phrase "violin sonata"

- "violin^3 piano" -composer:"ludwig van beethoven"

search the loose phrase "violin piano", allowing the two terms to be up to 3 terms apart
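
Any of these query strings can be passed as the qs argument of searcher.query (). A small sketch reusing the Quick Start collection and its BIT8 'genre' field:

.. code:: python

result = searcher.query (u'"violin sonata" genre:00001101/any', fetch = 5)
print (result ["total"], result ["result"])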


Model
=============

A model manages index files, training files, segments and properties.

.. code:: python

mdl = wissen.model (
indexdir = [dirs],
mode = [ CREATE | READ | MODIFY | APPEND ],
analyzer = None,
logger = None
)


Learner
---------

Building a model in Wissen takes 3 steps:

- Step I. Index documents to learn
- Step II. Build Corpus
- Step III. Select features and save the trained model

**Step I. Index documents**

The learner uses wissen.labeled_document, not wissen.document, and can take additional searchable fields if you need them. The label is the name of the category.

.. code:: python

learner = mdl.get_learner ()
for label, document in trainset:
    labeled_document = wissen.labeled_document (label, document)
    # additional searchable fields if you need them
    labeled_document.field (name, value, ftype = TEXT, lang = "un", encoding = None)
    learner.add_document (labeled_document)

learner.close ()


**Step II. Building Corpus**

Document Frequency (DF) is one of the major factors for a classifier. Low-DF terms are important for searching, but not for classification. An important part of learning is selecting valuable terms, and very low-DF terms are not very helpful for classifying new documents, because they also have a low probability of appearing in a new document.

So for learning/classification efficiency, it is useful to eliminate terms with very low and very high DF. For example, assume you index 30,000 web pages for learning and get about 100,000 terms. If you build the corpus with all of them, learning takes a very long time; but if you remove terms with DF < 10 or DF > 7000, 75% - 80% of all terms are removed.

.. code:: python

# reopen model with MODIFY
mdl = wissen.model (indexdir, MODIFY)
learner = mdl.get_learner ()

# show terms ordered by DF, for examination
learner.listbydf (dfmin = 10, dfmax = 7000)

# build corpus and save
learner.build (dfmin = 10, dfmax = 7000)

As a result, the corpus is built with about 25,000 terms. The time this takes depends on the number of terms.


**Step III. Feature Selecting and Saving Model**

Features are the most valuable terms for classifying new documents. It is important to understand that neither very many nor very few features gives the best result; selecting good features may be the most important part of classification.

For example, one of my URL classification tasks with 2 classes showed the results below. The classifier is NAIVEBAYES, the selector is GSS and the minimum DF is 2. The training set is 20,000 documents and the test set is 2,000.

- features 3,000 => 82.9% matched, 73 documents unclassified
- features 2,000 => 82.9% matched, 73 documents unclassified
- features 1,500 => 83.4% matched, 75 documents unclassified
- features 1,000 => 83.6% matched, 79 documents unclassified
- features 500 => 83.1% matched, 86 documents unclassified
- features 200 => 81.1% matched, 108 documents unclassified
- features 50 => 76.0% matched, 155 documents unclassified
- features 10 => 58.7% matched, 326 documents unclassified

The results show that going above 2,000 or below 1,000 features leaves classification quality unchanged or degrades it. For most classifiers, too few features increase the unclassified ratio; for NAIVEBAYES in particular, too many features also increase the unclassified ratio because of the way it calculates.

.. code:: python

mdl = wissen.model (indexdir, MODIFY)
learner = mdl.get_learner ()

learner.train (
cl_for = [
ALL (default) | NAIVEBAYES | FEATUREVOTE |
TFIDF | SIMILARITY | ROCCHIO | MULTIPATH
],
select = number of features if value is > 1 or ratio,
selector = [
CHI2 | GSS | DF | NGL | MI | TFIDF | IG | OR |
OR4P | RS | LOR | COS | PPHI | YULE | RMI
],
orderby = [SUM | MAX | AVG],
dfmin = 0,
dfmax = 0
)
learner.close ()

- cl_for: which classifier to train for. If not specified, these features are used as the default for every classifier that doesn't have its own feature set, so train () can be called repeatedly, once per classifier

- select: number of features if the value is > 1, otherwise a ratio of all terms. Generally no more than 7,000 features should be needed for classifying web pages or news articles into 20 classes

- selector: mathematical term-scoring algorithm for selecting features, considering relations between terms and between terms and labels, as well as DF, term frequency (TF), etc.

- orderby: final scoring method: the sum, max or average value

- dfmin, dfmax: even though low/high DF terms were already removed by build (), additional terms can be removed here for optimal results with a specific classifier. A concrete sketch follows below
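
Putting the options together, a concrete training call might look like the sketch below; the selector and feature count are only illustrative choices, not recommendations.

.. code:: python

learner.train (
cl_for = wissen.NAIVEBAYES,
select = 2000, # keep the 2,000 highest scored terms
selector = wissen.GSS,
orderby = wissen.MAX,
dfmin = 2
)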


To remove the training data for a specific classifier:

.. code:: python

mdl = wissen.model (indexdir, MODIFY)
learner = mdl.get_learner ()

learner.untrain (cl_for)
learner.close ()


**Finding Best Training Options**

Generally, because data sets have different attributes, it is hard to say which options are best. Repeating the train () / guess () cycle a number of times is strongly recommended for the best result, and that's not an easy process.

- index ()
- build ()
- train (initial options)
- measure results with guess ()
- append additional documents and build () again if needed
- train (other options)
- measure results again with guess ()
- ...
- find the optimal training options for your data set

To measure accuracy, your data should be split into a training set for learning and a test set for guess (), so you can evaluate metrics like `precision and recall`_, as in the sketch below.
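
A minimal evaluation sketch could look like this, assuming test_set is a held-out list of (label, text) pairs and that an empty 'result' list means the document was left unclassified:

.. code:: python

matched = unclassified = 0
for label, text in test_set:
    res = classifier.guess (text, cl = wissen.ROCCHIO)
    if not res ["result"]:
        unclassified += 1
    elif res ["result"][0][0] == label:
        matched += 1

print (matched, unclassified, len (test_set))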

For example, I had 27,000 web pages as a training set and 2,700 as a test set, for classifying pages as spam or not. The total number of indexed terms was 199,183, and I eliminated 94% of them with DF < 30 or DF > 7000, leaving only 10,221 terms.

- F: number of features selected by OR (Odds Ratio) with MAX
- NB: NAIVEBAYES, RO: ROCCHIO
- The numbers mean: matched % ratio excluding unclassified (number of unclassified documents)

- F 7,000: NB 97.2 (1,100), RO 95.4 (50)
- F 5,000: NB 97.4 (493), RO 94.8 (69)
- F 4,000: NB 96.6 (282), RO 91.6 (96)
- F 3,000: NB 93.2 (214), RO 86.2 (151)
- F 2,000: NB 89.4 (293), RO 80.1 (281)

Which would you choose? In my case, I chose F 5,000 with ROCCHIO because of the low unclassified ratio. But if speed were more important, I might choose F 3,000 with NAIVEBAYES.

Once everything is done and you have found the optimal parameters, you can optimize the classifier model.

.. code:: python

mdl = wissen.model (indexdir, wissen.MODIFY, an)
learner = mdl.get_learner ()
learner.optimize ()
learner.close ()

Note that once optimize () has been called,

- you cannot add additional training documents
- you cannot rebuild corpus by calling build () again
- but you can still call train () any time

The reason is that when low/high DF terms are eliminated by optimize (), the related index files are also shrunk unrecoverably for performance. If those operations are needed again, you have to start over from step I.

If you don't optimize, the SIMILARITY and ROCCHIO classifiers become inefficient (it does NOT affect the NAIVEBAYES, TFIDF and FEATUREVOTE classifiers). But if regular retraining is more important to you than speed, you should not optimize.

.. _`precision and recall`: https://en.wikipedia.org/wiki/Precision_and_recall


**Feature Selecting Methods**

- CHI2 = Chi Square Statistic
- GSS = GSS Coefficient
- DF = Document Frequency
- CF = Category Frequency
- NGL = NGL Coefficient
- MI = Mutual Information
- TFIDF = Term Frequency - Inverse Document Frequency
- IG = Information Gain
- OR = Odds Ratio
- OR4P = a kind of Odds Ratio (can't remember exactly which)
- RS = Relevancy Score
- LOR = Log Odds Ratio
- COS = Cosine Similarity
- PPHI = Pearson's PHI
- YULE = Yule
- RMI = Residual Mutual Information

I personally prefer the OR, IG and GSS selectors with the MAX method.


Classifier
------------

Finally,

.. code:: python

classifier = mdl.get_classifier ()
classifier.guess (
qs,
lang = "un",
cl = [
NAIVEBAYES (Default) | FEATUREVOTE | ROCCHIO |
TFIDF | SIMILARITY | META | MULTIPATH
],
top = 0,
cond = ""
)

classifier.cluster (
qs,
lang = "un"
)

classifier.close ()

- qs: full text stream to classify

- lang

- cl: which classifier; META is the default

- top: how many of the highest scored classification results to return; the default is 0, meaning only the highest scored result(s)

- cond: a conditional document-selecting query. Some classifiers, like ROCCHIO and SIMILARITY, calculate over many documents, so it is useful to shrink the number of documents. This only works when you have added additional searchable fields using labeled_document.field (...), as in the sketch below.
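
For instance, a guessing sketch that asks for the top 2 candidate labels and narrows the compared documents with a conditional query; 'site' here is a hypothetical searchable field that would have been added via labeled_document.field () at learning time.

.. code:: python

result = classifier.guess (
"rainy cold windy",
cl = wissen.ROCCHIO,
top = 2,
cond = "site:weatherblog"
)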

**Implemented Classifiers**

- NAIVEBAYES: Naive Bayes probability, the default guesser
- FEATUREVOTE: Feature Voting Classifier
- ROCCHIO: Rocchio Classifier
- TFIDF: Max TDIDF Score
- SIMILARITY: Max Cosine Similarity
- MULTIPATH: experimental multi-path classifier; the terms of the document being classified are clustered into multiple sets by co-word frequency before guessing
- META: merges and decides from the multiple results guessed by the NAIVEBAYES, FEATUREVOTE and ROCCHIO classifiers

If you need speed above all, NAIVEBAYES is a good choice. It is an old theory, but it still performs very well in both speed and accuracy given a proper training set.

For more detail on each classification algorithm, please search the web.


**Optimizing Each Classifiers**

To give detailed options to a classifier, use setopt (classifier, option_name = option_value, ...).


.. code:: python

classifier = mdl.get_classifier ()
classifier.setopt (wissen.ROCCHIO, topdoc = 200)

The SIMILARITY and ROCCHIO classifiers basically have to compare against every indexed document, but Wissen can compare against a selected subset via the 'topdoc' option. That number of documents is selected by the highest TFIDF scores, for classification performance. The default topdoc value is 100. If you set it to 0, Wissen compares against all documents that have at least one of the features. In my experience there is no critical difference except speed.

Currently available options are:

* ALL

- verbose = False

* ROCCHIO

- topdoc = 100

* MULTIPATH

+ subcl = [ FEATUREVOTE (default) | NAIVEBAYES | ROCCHIO ]
+ scoreby = [ IG (default) | MI | OR | R ]
+ choiceby = [ AVG (default) | MIN ], which value to use when scoring a term against each term in a cluster
+ threshold = 1.0, a float value for creating a new cluster; it is measured with Information Gain and its useful range varies somewhat with the number of training documents. A tuning sketch follows below.
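
For example, tuning the MULTIPATH classifier could look like the sketch below; the particular option values are illustrative only.

.. code:: python

classifier = mdl.get_classifier ()
classifier.setopt (
wissen.MULTIPATH,
subcl = wissen.NAIVEBAYES,
scoreby = wissen.IG,
choiceby = wissen.AVG,
threshold = 1.0
)
print (classifier.guess ("rainy cold", cl = wissen.MULTIPATH))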


Document Cluster
-----------------

TODO

.. code:: python

cluster = mdl.get_dcluster ()


Term Cluster
-------------

TODO

.. code:: python

cluster = mdl.get_tcluster ()



Handling Multiple Searchers & Classifiers
===========================================

When creating multiple searchers and classifiers, wissen.task might be useful.
Here's a script named 'config.py':

.. code:: python

import wissen
from wissen.lib import logger

def start_wissen (numthreads, logger):
    wissen.configure (numthreads, logger)

    analyzer = wissen.standard_analyzer ()
    col = wissen.collection ("./data1", wissen.READ, analyzer)
    wissen.assign ("data1", col.get_searcher (max_result = 2000))

    analyzer = wissen.standard_analyzer (max_term = 1000, stem_level = 2)
    mdl = wissen.model ("./data2", wissen.READ, analyzer)
    wissen.assign ("data2", mdl.get_classifier ())

The first argument of assign () is an alias for the searcher or classifier.

If you call config.start_wissen () in any script, then in other Python scripts you can just import wissen and use it.

.. code:: python

import wissen

wissen.query ("data1", "mozart sonatas")
wissen.guess ("data2", "mozart sonatas")

# close and resign
wissen.close ("data1")
wissen.resign ("data1")


At the end of your app, call wissen.shutdown ():

.. code:: python

import wissen

wissen.shutdown ()


Mounting Wissen On Skitai with Saddle
======================================

**New in version 0.12.14**

You can use a RESTful API with `Skitai-Saddle`_.

Copy and save the code below as app.py.

.. code:: python

import os
import wissen
import skitai

pref = skitai.pref ()
pref.use_reloader = 1
pref.debug = 1

config = pref.config
config.sched = "0/15 * * * *"
config.enable_mirror = False
config.remote = "http://192.168.1.100:5000"
config.local = "http://127.0.0.1:5000"

config.enable_index = False
config.resource_dir = skitai.joinpath ('resources')

config.logpath = os.name == "posix" and '/var/log/assai' or None

skitai.mount ("/v1", wissen, "app", pref)
skitai.run (
port = 5000,
logpath = config.logpath
)

And run

.. code:: bash

python app.py -v

Here's an example of a client-side indexer script using the API.

.. code:: python

colopt = {
'data_dir': [
'models/0/books',
'models/1/books',
'models/2/books'
],
'analyzer': {
"ngram": 0,
"stem_level": 1,
"strip_html": 0,
"make_lower_case": 1
},
'indexer': {
'force_merge': 0,
'optimize': 1,
'max_memory': 10000000,
'max_segments': 10,
},
'searcher': {
'max_result': 2000,
'num_query_cache': 200
}
}

import requests
import wissen

session = requests.Session ()

# check current collections
r = session.get ('http://127.0.0.1:5000/v1/').json ()
if 'books' not in r ["collections"]:
    # collection does not exist yet, so create it
    session.post ('http://127.0.0.1:5000/v1/books', colopt)

dbc = db.connect (...)
cursor = dbc.cursor ()
cursor.execute (...)

numdoc = 0
while 1:
    row = cursor.fetchone ()
    if not row: break
    doc = wissen.document (row._id)
    doc.content ({"author": row.author, "title": row.title, "abstract": row.abstract})
    doc.snippet (row.abstract)
    doc.field ('default', "%s %s" % (row.title, row.abstract), wissen.TEXT, 'en')
    doc.field ('title', row.title, wissen.TEXT, 'en')
    doc.field ('author', row.author, wissen.STRING)
    doc.field ('isbn', row.isbn, wissen.STRING)
    doc.field ('year', row.year, wissen.INT16)
    session.post ('http://127.0.0.1:5000/v1/books/documents', doc.as_json ())
    numdoc += 1
    if numdoc % 1000 == 0:
        session.get ('http://127.0.0.1:5000/v1/books/collection/commit')

cursor.close ()
dbc.close ()

Other APIs are:

.. code:: python

session.patch (
'http://127.0.0.1:5000/v1/books/documents/' + row._id,
doc.as_json ()
)
session.delete ('http://127.0.0.1:5000/v1/books/documents/' + row._id)

session.delete ('http://127.0.0.1:5000/v1/books/search?q=title:book')
session.get ('http://127.0.0.1:5000/v1/books/search?q=title:book')
session.get ('http://127.0.0.1:5000/v1/books/guess?q=title:book')


For more detail about the API, see `app_v1.py`_.

.. _`Skitai-Saddle`: https://pypi.python.org/pypi/skitai
.. _`app_v1.py`: https://gitlab.com/hansroh/wissen/blob/master/wissen/export/skitai/app_v1.py


Links
======

- `GitLab Repository`_
- Bug Report: `GitLab issues`_

.. _`GitLab Repository`: https://gitlab.com/hansroh/wissen
.. _`GitLab issues`: https://gitlab.com/hansroh/wissen/issues



Change Log
============

0.13

- change wissen.document method names
- fix index queue file locking

0.12

- add biword arg to standard_analyzer
- change export package name from appack to package
- add Skito-Saddle app
- fix analyzer.count_stopwords return value
- change development status to Alpha
- add wissen.assign(alias, searcher/classifier) and query(alias), guess(alias)
- fix threads count and memory allocation
- add example for Skitai-Saddle app to manual

0.11

- fix HTML strip and segment merging etc.
- add MULTIPATH classifier
- add learner.optimize ()
- make learner.build & learner.train efficient

0.10 - change version format, remove all str*_s ()

0.9 - support Python 3.x

0.8 - change license from BSD to GPL V3
