Python client for oka repository

Project description

oka - Client for OKA repository

  • Latest version as a package
  • Current code
  • User manual
  • API documentation

Overview

oka is a Python client for the OKA repository. It also provides utilities for processing data.

Installation

...as a standalone lib

# Set up a virtualenv.
python3 -m venv venv
source venv/bin/activate

# Install from PyPI...
pip install --upgrade pip
pip install -U oka
pip install -U "oka[full]"  # the 'full' extra enables additional functionality (recommended); quotes avoid shell globbing

# ...or, install from updated source code.
pip install git+https://github.com/rabizao/oka

...from source

sudo apt install python3.8-venv python3.8-dev python3.8-distutils # For Debian-like systems.
git clone https://github.com/rabizao/oka
cd oka
python3.8 -m venv venv
source venv/bin/activate
pip install -e .

Usage

Hello world

from oka import Oka, generate_token, toy_df

# Create a pandas dataframe.
df = toy_df()
print(df.head())
"""
   attr1  attr2  class
0    5.1    6.4      0
1    1.1    2.5      1
2    6.1    3.6      0
3    1.1    3.5      1
4    3.1    2.5      0
"""
# Login.
token = generate_token("http://localhost:5000")
client = Oka(token, "http://localhost:5000")

# Store.
id = client.send(df)

# Store again.
id = client.send(df)
"""
Content already stored for id iJ_e4463c51904e9efb800533d25082af2a7bf77
"""

# Fetch.
df = client.get(id)

print(df.head())
"""
   attr1  attr2  class
0    5.1    6.4      0
1    1.1    2.5      1
2    6.1    3.6      0
3    1.1    3.5      1
4    3.1    2.5      0
"""

DataFrame by hand

import pandas as pd
from oka import Oka, generate_token

# Create a pandas dataframe.
df = pd.DataFrame(
    [[1, 2, "+"],
     [3, 4, "-"]],
    index=["row 1", "row 2"],
    columns=["col 1", "col 2", "class"],
)
print(df.head())
"""
       col 1  col 2 class
row 1      1      2     +
row 2      3      4     -
"""
# Login.
token = generate_token("http://localhost:5000")
client = Oka(token, "http://localhost:5000")

# Store.
id = client.send(df)

# Store again.
id = client.send(df)
"""
Content already stored for id f7_6b9deafec2562edde56bfdc573b336b55cb16
"""

# Fetch.
df = client.get(id)

print(df.head())
"""
       col 1  col 2 class
row 1      1      2     +
row 2      3      4     -
"""
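
The "Content already stored" message above reflects content-addressed storage: the id is derived from the data itself, so sending identical content twice produces the same id and the duplicate is skipped. A minimal sketch of that idea using `hashlib` (purely illustrative; this is not oka/idict's actual id scheme):

```python
import hashlib
import json

def content_id(records) -> str:
    # Serialize deterministically, then hash: identical content -> identical id.
    payload = json.dumps(records, sort_keys=True).encode()
    return hashlib.blake2b(payload, digest_size=20).hexdigest()

rows = [{"col 1": 1, "col 2": 2, "class": "+"},
        {"col 1": 3, "col 2": 4, "class": "-"}]

id1 = content_id(rows)
id2 = content_id([dict(r) for r in rows])  # same content, distinct objects
assert id1 == id2  # a duplicate send can be detected from the id alone
```

Because the id depends only on content, a repository can answer "already stored?" without comparing the payloads themselves.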

Machine Learning workflow

import json

from sklearn.ensemble import RandomForestClassifier as RF

from idict import let, idict
from idict.function.classification import fit, predict
from idict.function.evaluation import split
from oka import Oka, generate_token

# Login.
token = generate_token("http://localhost:5000")
cache = Oka(token, "http://localhost:5000")
d = (
        idict.fromtoy()
        >> split
        >> let(fit, algorithm=RF, config={"n_estimators": 55}, Xin="Xtr", yin="ytr")
        >> let(predict, Xin="Xts")
        >> (lambda X: {"X2": X * X, "_history": ...})
        >> [cache]
)
cache.send(d)
print(json.dumps(list(d.history.keys()), indent=2))
"""
[
  "split----------------------sklearn-1.0.1",
  "fit--------------------------------idict",
  "predict----------------------------idict",
  "RwMG040tZc3XNoJkwkBe6A1aIUGNQ4EAQVqi.uAl"
]
"""
d.show()
"""
{
    "X2": "«{'attr1': {0: 26.009999999999998, 1: 1.2100000000000002, 2: 37.209999999999994, 3: 1.2100000000000002, 4: 9.610000000000001, 5: 22.090000000000003, 6: 82.80999999999999, 7: 68.89000000000001, 8: 82.80999999999999, 9: 6.25, 10: 50.41, 11: 0.010000000000000002, 12: 4.41, 13: 0.010000000000000002, 14: 26.009999999999998, 15: 967.21, 16: 1.2100000000000002, 17: 4.840000000000001, 18: 9.610000000000001, 19: 1.2100000000000002}, 'attr2': {0: 40.96000000000001, 1: 6.25, 2: 12.96, 3: 12.25, 4: 6.25, 5: 24.010000000000005, 6: 12.25, 7: 8.41, 8: 51.84, 9: 20.25, 10: 43.559999999999995, 11: 18.49, 12: 0.010000000000000002, 13: 16.0, 14: 20.25, 15: 22.090000000000003, 16: 10.240000000000002, 17: 72.25, 18: 6.25, 19: 72.25}}»",
    "_history": "split----------------------sklearn-1.0.1 fit--------------------------------idict predict----------------------------idict RwMG040tZc3XNoJkwkBe6A1aIUGNQ4EAQVqi.uAl",
    "z": "«[1 0 1 0 1 1 0]»",
    "model": "RandomForestClassifier(n_estimators=55)",
    "Xtr": "«{'attr1': {8: 9.1, 2: 6.1, 18: 3.1, 7: 8.3, 17: 2.2, 4: 3.1, 19: 1.1, 5: 4.7, 12: 2.1, 16: 1.1, 3: 1.1, 1: 1.1, 11: 0.1}, 'attr2': {8: 7.2, 2: 3.6, 18: 2.5, 7: 2.9, 17: 8.5, 4: 2.5, 19: 8.5, 5: 4.9, 12: 0.1, 16: 3.2, 3: 3.5, 1: 2.5, 11: 4.3}}»",
    "ytr": "«[0 0 0 1 1 0 1 1 0 0 1 1 1]»",
    "Xts": "«{'attr1': {13: 0.1, 6: 9.1, 9: 2.5, 10: 7.1, 0: 5.1, 14: 5.1, 15: 31.1}, 'attr2': {13: 4.0, 6: 3.5, 9: 4.5, 10: 6.6, 0: 6.4, 14: 4.5, 15: 4.7}}»",
    "yts": "«[1 0 1 0 0 0 1]»",
    "X": "«{'attr1': {0: 5.1, 1: 1.1, 2: 6.1, 3: 1.1, 4: 3.1, 5: 4.7, 6: 9.1, 7: 8.3, 8: 9.1, 9: 2.5, 10: 7.1, 11: 0.1, 12: 2.1, 13: 0.1, 14: 5.1, 15: 31.1, 16: 1.1, 17: 2.2, 18: 3.1, 19: 1.1}, 'attr2': {0: 6.4, 1: 2.5, 2: 3.6, 3: 3.5, 4: 2.5, 5: 4.9, 6: 3.5, 7: 2.9, 8: 7.2, 9: 4.5, 10: 6.6, 11: 4.3, 12: 0.1, 13: 4.0, 14: 4.5, 15: 4.7, 16: 3.2, 17: 8.5, 18: 2.5, 19: 8.5}}»",
    "y": "«[0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1]»",
    "_id": "FIu2PlbixU7uQkPWFo.IpLQs7rSUemOpw2Gjf2fM",
    "_ids": {
        "X2": "AAg2Q7L1VlJpnJYTjdGVqgiJCpKNQ4EAQVqi.uAl",
        "_history": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "z": "IJgjuSW2NDgVJhAXIunMjCFq--5ToIdtR9zF3wAU",
        "model": "BbsA4NV-O2xhlYO0ocBOMqgQUs78Am3L99ifg.9h",
        "Xtr": "rE3WM8cu5e1XrvmPdKRVr9DJ0-L9mlearj-1.0.2",
        "ytr": "buTGORt9MCqtqUmtRAaCUY1dot1tklearn-1.0.3",
        "Xts": "6Z42mf0rylBy0cJV0tvi7naILYOsklearn-1.0.4",
        "yts": "ewHi412m.1H0.kMKdlQ-tCib7s4tklearn-1.0.5",
        "X": "-2_3606060c77dc8f0807deec66fac5120578d0e (content: 34_1738c83af436029507def2710bc5125f58d0e)",
        "y": "o._080bf2a35d8ab275d4050dfd02b939104feac (content: S0_b6360d62ccafa275d4051dfd02b939104feac)"
    }
}
"""
print(d.z)
"""
[1 0 1 0 1 1 0]
"""
d.show()
"""
{
    "X2": "«{'attr1': {0: 26.009999999999998, 1: 1.2100000000000002, 2: 37.209999999999994, 3: 1.2100000000000002, 4: 9.610000000000001, 5: 22.090000000000003, 6: 82.80999999999999, 7: 68.89000000000001, 8: 82.80999999999999, 9: 6.25, 10: 50.41, 11: 0.010000000000000002, 12: 4.41, 13: 0.010000000000000002, 14: 26.009999999999998, 15: 967.21, 16: 1.2100000000000002, 17: 4.840000000000001, 18: 9.610000000000001, 19: 1.2100000000000002}, 'attr2': {0: 40.96000000000001, 1: 6.25, 2: 12.96, 3: 12.25, 4: 6.25, 5: 24.010000000000005, 6: 12.25, 7: 8.41, 8: 51.84, 9: 20.25, 10: 43.559999999999995, 11: 18.49, 12: 0.010000000000000002, 13: 16.0, 14: 20.25, 15: 22.090000000000003, 16: 10.240000000000002, 17: 72.25, 18: 6.25, 19: 72.25}}»",
    "_history": "split----------------------sklearn-1.0.1 fit--------------------------------idict predict----------------------------idict RwMG040tZc3XNoJkwkBe6A1aIUGNQ4EAQVqi.uAl",
    "z": "«[1 0 1 0 1 1 0]»",
    "model": "RandomForestClassifier(n_estimators=55)",
    "Xtr": "«{'attr1': {8: 9.1, 2: 6.1, 18: 3.1, 7: 8.3, 17: 2.2, 4: 3.1, 19: 1.1, 5: 4.7, 12: 2.1, 16: 1.1, 3: 1.1, 1: 1.1, 11: 0.1}, 'attr2': {8: 7.2, 2: 3.6, 18: 2.5, 7: 2.9, 17: 8.5, 4: 2.5, 19: 8.5, 5: 4.9, 12: 0.1, 16: 3.2, 3: 3.5, 1: 2.5, 11: 4.3}}»",
    "ytr": "«[0 0 0 1 1 0 1 1 0 0 1 1 1]»",
    "Xts": "«{'attr1': {13: 0.1, 6: 9.1, 9: 2.5, 10: 7.1, 0: 5.1, 14: 5.1, 15: 31.1}, 'attr2': {13: 4.0, 6: 3.5, 9: 4.5, 10: 6.6, 0: 6.4, 14: 4.5, 15: 4.7}}»",
    "yts": "«[1 0 1 0 0 0 1]»",
    "X": "«{'attr1': {0: 5.1, 1: 1.1, 2: 6.1, 3: 1.1, 4: 3.1, 5: 4.7, 6: 9.1, 7: 8.3, 8: 9.1, 9: 2.5, 10: 7.1, 11: 0.1, 12: 2.1, 13: 0.1, 14: 5.1, 15: 31.1, 16: 1.1, 17: 2.2, 18: 3.1, 19: 1.1}, 'attr2': {0: 6.4, 1: 2.5, 2: 3.6, 3: 3.5, 4: 2.5, 5: 4.9, 6: 3.5, 7: 2.9, 8: 7.2, 9: 4.5, 10: 6.6, 11: 4.3, 12: 0.1, 13: 4.0, 14: 4.5, 15: 4.7, 16: 3.2, 17: 8.5, 18: 2.5, 19: 8.5}}»",
    "y": "«[0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1]»",
    "_id": "FIu2PlbixU7uQkPWFo.IpLQs7rSUemOpw2Gjf2fM",
    "_ids": {
        "X2": "AAg2Q7L1VlJpnJYTjdGVqgiJCpKNQ4EAQVqi.uAl",
        "_history": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "z": "IJgjuSW2NDgVJhAXIunMjCFq--5ToIdtR9zF3wAU",
        "model": "BbsA4NV-O2xhlYO0ocBOMqgQUs78Am3L99ifg.9h",
        "Xtr": "rE3WM8cu5e1XrvmPdKRVr9DJ0-L9mlearj-1.0.2",
        "ytr": "buTGORt9MCqtqUmtRAaCUY1dot1tklearn-1.0.3",
        "Xts": "6Z42mf0rylBy0cJV0tvi7naILYOsklearn-1.0.4",
        "yts": "ewHi412m.1H0.kMKdlQ-tCib7s4tklearn-1.0.5",
        "X": "-2_3606060c77dc8f0807deec66fac5120578d0e (content: 34_1738c83af436029507def2710bc5125f58d0e)",
        "y": "o._080bf2a35d8ab275d4050dfd02b939104feac (content: S0_b6360d62ccafa275d4051dfd02b939104feac)"
    }
}
"""
# A field named '_' makes the function a noop process, triggered only once when one of the other provided fields is accessed.
d >>= (lambda _, X2, y: print("Some logging/printing that doesn't affect data...\nX²=\n", X2[:3]))
d.show()
"""
{
    "X2": "→(X2 y _)",
    "y": "→(X2 y _)",
    "_history": "split----------------------sklearn-1.0.1 fit--------------------------------idict predict----------------------------idict RwMG040tZc3XNoJkwkBe6A1aIUGNQ4EAQVqi.uAl",
    "z": "«[1 0 1 0 1 1 0]»",
    "model": "RandomForestClassifier(n_estimators=55)",
    "Xtr": "«{'attr1': {8: 9.1, 2: 6.1, 18: 3.1, 7: 8.3, 17: 2.2, 4: 3.1, 19: 1.1, 5: 4.7, 12: 2.1, 16: 1.1, 3: 1.1, 1: 1.1, 11: 0.1}, 'attr2': {8: 7.2, 2: 3.6, 18: 2.5, 7: 2.9, 17: 8.5, 4: 2.5, 19: 8.5, 5: 4.9, 12: 0.1, 16: 3.2, 3: 3.5, 1: 2.5, 11: 4.3}}»",
    "ytr": "«[0 0 0 1 1 0 1 1 0 0 1 1 1]»",
    "Xts": "«{'attr1': {13: 0.1, 6: 9.1, 9: 2.5, 10: 7.1, 0: 5.1, 14: 5.1, 15: 31.1}, 'attr2': {13: 4.0, 6: 3.5, 9: 4.5, 10: 6.6, 0: 6.4, 14: 4.5, 15: 4.7}}»",
    "yts": "«[1 0 1 0 0 0 1]»",
    "X": "«{'attr1': {0: 5.1, 1: 1.1, 2: 6.1, 3: 1.1, 4: 3.1, 5: 4.7, 6: 9.1, 7: 8.3, 8: 9.1, 9: 2.5, 10: 7.1, 11: 0.1, 12: 2.1, 13: 0.1, 14: 5.1, 15: 31.1, 16: 1.1, 17: 2.2, 18: 3.1, 19: 1.1}, 'attr2': {0: 6.4, 1: 2.5, 2: 3.6, 3: 3.5, 4: 2.5, 5: 4.9, 6: 3.5, 7: 2.9, 8: 7.2, 9: 4.5, 10: 6.6, 11: 4.3, 12: 0.1, 13: 4.0, 14: 4.5, 15: 4.7, 16: 3.2, 17: 8.5, 18: 2.5, 19: 8.5}}»",
    "_id": "FIu2PlbixU7uQkPWFo.IpLQs7rSUemOpw2Gjf2fM",
    "_ids": {
        "X2": "xjIshkXdp8bBhW6mM28r4BZlBLNNQ4EAQVqi.uAm",
        "y": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "_history": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "z": "IJgjuSW2NDgVJhAXIunMjCFq--5ToIdtR9zF3wAU",
        "model": "BbsA4NV-O2xhlYO0ocBOMqgQUs78Am3L99ifg.9h",
        "Xtr": "rE3WM8cu5e1XrvmPdKRVr9DJ0-L9mlearj-1.0.2",
        "ytr": "buTGORt9MCqtqUmtRAaCUY1dot1tklearn-1.0.3",
        "Xts": "6Z42mf0rylBy0cJV0tvi7naILYOsklearn-1.0.4",
        "yts": "ewHi412m.1H0.kMKdlQ-tCib7s4tklearn-1.0.5",
        "X": "-2_3606060c77dc8f0807deec66fac5120578d0e (content: 34_1738c83af436029507def2710bc5125f58d0e)"
    }
}
"""
print("Triggering noop function by accessing 'y'...")
print("y", d.y[:3])
"""
Triggering noop function by accessing 'y'...
Some logging/printing that doesn't affect data...
X²=
    attr1  attr2
0  26.01  40.96
1   1.21   6.25
2  37.21  12.96
y [0 1 0]
"""
d.show()
"""
{
    "X2": "     attr1  attr2\n0    26.01  40.96\n1     1.21   6.25\n2    37.21  12.96\n3     1.21  12.25\n4     9.61   6.25\n5    22.09  24.01\n6    82.81  12.25\n7    68.89   8.41\n8    82.81  51.84\n9     6.25  20.25\n10   50.41  43.56\n11    0.01  18.49\n12    4.41   0.01\n13    0.01  16.00\n14   26.01  20.25\n15  967.21  22.09\n16    1.21  10.24\n17    4.84  72.25\n18    9.61   6.25\n19    1.21  72.25",
    "y": "«[0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1]»",
    "_history": "split----------------------sklearn-1.0.1 fit--------------------------------idict predict----------------------------idict RwMG040tZc3XNoJkwkBe6A1aIUGNQ4EAQVqi.uAl",
    "z": "«[1 0 1 0 1 1 0]»",
    "model": "RandomForestClassifier(n_estimators=55)",
    "Xtr": "«{'attr1': {8: 9.1, 2: 6.1, 18: 3.1, 7: 8.3, 17: 2.2, 4: 3.1, 19: 1.1, 5: 4.7, 12: 2.1, 16: 1.1, 3: 1.1, 1: 1.1, 11: 0.1}, 'attr2': {8: 7.2, 2: 3.6, 18: 2.5, 7: 2.9, 17: 8.5, 4: 2.5, 19: 8.5, 5: 4.9, 12: 0.1, 16: 3.2, 3: 3.5, 1: 2.5, 11: 4.3}}»",
    "ytr": "«[0 0 0 1 1 0 1 1 0 0 1 1 1]»",
    "Xts": "«{'attr1': {13: 0.1, 6: 9.1, 9: 2.5, 10: 7.1, 0: 5.1, 14: 5.1, 15: 31.1}, 'attr2': {13: 4.0, 6: 3.5, 9: 4.5, 10: 6.6, 0: 6.4, 14: 4.5, 15: 4.7}}»",
    "yts": "«[1 0 1 0 0 0 1]»",
    "X": "«{'attr1': {0: 5.1, 1: 1.1, 2: 6.1, 3: 1.1, 4: 3.1, 5: 4.7, 6: 9.1, 7: 8.3, 8: 9.1, 9: 2.5, 10: 7.1, 11: 0.1, 12: 2.1, 13: 0.1, 14: 5.1, 15: 31.1, 16: 1.1, 17: 2.2, 18: 3.1, 19: 1.1}, 'attr2': {0: 6.4, 1: 2.5, 2: 3.6, 3: 3.5, 4: 2.5, 5: 4.9, 6: 3.5, 7: 2.9, 8: 7.2, 9: 4.5, 10: 6.6, 11: 4.3, 12: 0.1, 13: 4.0, 14: 4.5, 15: 4.7, 16: 3.2, 17: 8.5, 18: 2.5, 19: 8.5}}»",
    "_id": "FIu2PlbixU7uQkPWFo.IpLQs7rSUemOpw2Gjf2fM",
    "_ids": {
        "X2": "xjIshkXdp8bBhW6mM28r4BZlBLNNQ4EAQVqi.uAm",
        "y": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "_history": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "z": "IJgjuSW2NDgVJhAXIunMjCFq--5ToIdtR9zF3wAU",
        "model": "BbsA4NV-O2xhlYO0ocBOMqgQUs78Am3L99ifg.9h",
        "Xtr": "rE3WM8cu5e1XrvmPdKRVr9DJ0-L9mlearj-1.0.2",
        "ytr": "buTGORt9MCqtqUmtRAaCUY1dot1tklearn-1.0.3",
        "Xts": "6Z42mf0rylBy0cJV0tvi7naILYOsklearn-1.0.4",
        "yts": "ewHi412m.1H0.kMKdlQ-tCib7s4tklearn-1.0.5",
        "X": "-2_3606060c77dc8f0807deec66fac5120578d0e (content: 34_1738c83af436029507def2710bc5125f58d0e)"
    }
}
"""
# The same workflow will not be processed again if the same cache is used.
d = (
        idict.fromtoy()
        >> split
        >> let(fit, algorithm=RF, config={"n_estimators": 55}, Xin="Xtr", yin="ytr")
        >> let(predict, Xin="Xts")
        >> (lambda X: {"X2": X * X})
        >> (lambda _, X2, y: print("Some logging/printing that doesn't affect data...", X2.head()))
        >> [cache]
)
d.show()
"""
{
    "X2": "→(↑ X2→(X) y _)",
    "y": "→(↑ X2→(X) y _)",
    "z": "→(↑ input Xin yout version Xts→(input config X y) model→(algorithm config Xin yin output version Xtr→(input config X y) ytr→(input config X y)))",
    "_history": "split----------------------sklearn-1.0.1 fit--------------------------------idict predict----------------------------idict",
    "model": "→(↑ algorithm config Xin yin output version Xtr→(input config X y) ytr→(input config X y))",
    "Xtr": "→(↑ input config X y)",
    "ytr": "→(↑ input config X y)",
    "Xts": "→(↑ input config X y)",
    "yts": "→(↑ input config X y)",
    "X": "«{'attr1': {0: 5.1, 1: 1.1, 2: 6.1, 3: 1.1, 4: 3.1, 5: 4.7, 6: 9.1, 7: 8.3, 8: 9.1, 9: 2.5, 10: 7.1, 11: 0.1, 12: 2.1, 13: 0.1, 14: 5.1, 15: 31.1, 16: 1.1, 17: 2.2, 18: 3.1, 19: 1.1}, 'attr2': {0: 6.4, 1: 2.5, 2: 3.6, 3: 3.5, 4: 2.5, 5: 4.9, 6: 3.5, 7: 2.9, 8: 7.2, 9: 4.5, 10: 6.6, 11: 4.3, 12: 0.1, 13: 4.0, 14: 4.5, 15: 4.7, 16: 3.2, 17: 8.5, 18: 2.5, 19: 8.5}}»",
    "_id": "GE3wknZKmqOtdeontIN82acSc8CMSbrJ8-Y8UcBU",
    "_ids": {
        "X2": "1qGvgoXsfVDOqj.kW6cUGyymvKIFsWgUsRJ7EFWu",
        "y": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "z": "IJgjuSW2NDgVJhAXIunMjCFq--5ToIdtR9zF3wAU",
        "_history": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "model": "BbsA4NV-O2xhlYO0ocBOMqgQUs78Am3L99ifg.9h",
        "Xtr": "rE3WM8cu5e1XrvmPdKRVr9DJ0-L9mlearj-1.0.2",
        "ytr": "buTGORt9MCqtqUmtRAaCUY1dot1tklearn-1.0.3",
        "Xts": "6Z42mf0rylBy0cJV0tvi7naILYOsklearn-1.0.4",
        "yts": "ewHi412m.1H0.kMKdlQ-tCib7s4tklearn-1.0.5",
        "X": "-2_3606060c77dc8f0807deec66fac5120578d0e (content: 34_1738c83af436029507def2710bc5125f58d0e)"
    }
}
"""
cache.send(d)

d = cache.get(d.id)
d.show()
"""
{
    "X2": "→(↑)",
    "y": "→(↑)",
    "z": "→(↑)",
    "_history": "→(↑)",
    "model": "→(↑)",
    "Xtr": "→(↑)",
    "ytr": "→(↑)",
    "Xts": "→(↑)",
    "yts": "→(↑)",
    "X": "→(↑)",
    "_id": "GE3wknZKmqOtdeontIN82acSc8CMSbrJ8-Y8UcBU",
    "_ids": {
        "X2": "1qGvgoXsfVDOqj.kW6cUGyymvKIFsWgUsRJ7EFWu",
        "y": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "z": "IJgjuSW2NDgVJhAXIunMjCFq--5ToIdtR9zF3wAU",
        "_history": "ofEb.nRSYsUsgAnnyp4KYFovZaUOV6000sv....-",
        "model": "BbsA4NV-O2xhlYO0ocBOMqgQUs78Am3L99ifg.9h",
        "Xtr": "rE3WM8cu5e1XrvmPdKRVr9DJ0-L9mlearj-1.0.2",
        "ytr": "buTGORt9MCqtqUmtRAaCUY1dot1tklearn-1.0.3",
        "Xts": "6Z42mf0rylBy0cJV0tvi7naILYOsklearn-1.0.4",
        "yts": "ewHi412m.1H0.kMKdlQ-tCib7s4tklearn-1.0.5",
        "X": "-2_3606060c77dc8f0807deec66fac5120578d0e (content: 34_1738c83af436029507def2710bc5125f58d0e)"
    }
}
"""
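
The "→(↑ …)" entries above mark lazy fields: nothing is computed (or fetched from the cache) until the field is actually accessed. The pattern can be sketched with a tiny mapping that stores zero-argument thunks and materializes each value on first access (a conceptual sketch under that assumption, not idict's implementation):

```python
class LazyDict:
    """Map of fields to values or zero-arg thunks; thunks run on first access."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = ("value", value)

    def put_lazy(self, key, thunk):
        self._data[key] = ("thunk", thunk)

    def __getitem__(self, key):
        kind, payload = self._data[key]
        if kind == "thunk":
            payload = payload()               # materialize once
            self._data[key] = ("value", payload)
        return payload

d = LazyDict()
d.put("X", [1, 2, 3])
d.put_lazy("X2", lambda: [x * x for x in d["X"]])  # deferred until accessed

print(d._data["X2"][0])  # 'thunk': not computed yet
print(d["X2"])           # access triggers computation: [1, 4, 9]
```

In the workflow above the same principle lets `cache.get(d.id)` return instantly: every field is a deferred fetch until read.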

More info

Aside from the papers on identification and on similarity (not yet published), the PyPI package, and the GitHub repository, a lower-level perspective is provided in the API documentation.

Grants

This work was supported by Fapesp under supervision of Prof. André C. P. L. F. de Carvalho at CEPID-CeMEAI (Grants 2013/07375-0 – 2019/01735-0).
