A flexible, generalized tree-based data structure.
treevalue
`TreeValue` is a generalized tree-based data structure mainly developed by OpenDILab contributors. Almost all operations can be performed in the form of trees in a convenient way, simplifying structure processing when the computation is tree-based.
Installation
You can simply install it with pip from the official PyPI site:

```shell
pip install treevalue
```
For more information about installation, you can refer to the installation guide.
Documentation
The detailed documentation is hosted at https://opendilab.github.io/treevalue. Only the English version is available for now; the Chinese documentation is still under development.
Quick Start
You can easily create a tree value object based on `FastTreeValue`.
```python
from treevalue import FastTreeValue

if __name__ == '__main__':
    t = FastTreeValue({
        'a': 1,
        'b': 2.3,
        'x': {
            'c': 'str',
            'd': [1, 2, None],
            'e': b'bytes',
        }
    })
    print(t)
```
The result should be:

```
<FastTreeValue 0x7f6c7df00160 keys: ['a', 'b', 'x']>
├── 'a' --> 1
├── 'b' --> 2.3
└── 'x' --> <FastTreeValue 0x7f6c81150860 keys: ['c', 'd', 'e']>
    ├── 'c' --> 'str'
    ├── 'd' --> [1, 2, None]
    └── 'e' --> b'bytes'
```
The structure of `t` should look like the tree shown above. Not only a visible tree structure, but also abundant operation support is provided.
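To make the idea concrete, here is a minimal standard-library sketch of what such a tree value does (this is NOT treevalue's actual implementation, just an illustration of the concept): nested dicts are wrapped so that sub-dicts become subtrees and keys are readable as attributes. The class name `MiniTree` is hypothetical.

```python
# A minimal sketch of the idea behind a tree value (NOT treevalue's
# real implementation): wrap a nested dict so that sub-dicts become
# subtrees and keys can be read as attributes.
class MiniTree:
    def __init__(self, data):
        # Store the raw mapping; sub-dicts are wrapped lazily on access.
        self._data = dict(data)

    def __getattr__(self, key):
        # Called only when normal attribute lookup fails.
        try:
            value = self._data[key]
        except KeyError:
            raise AttributeError(key)
        # A nested dict is itself a subtree.
        return MiniTree(value) if isinstance(value, dict) else value

    def keys(self):
        return sorted(self._data.keys())


t = MiniTree({'a': 1, 'b': 2.3, 'x': {'c': 'str'}})
print(t.a)    # leaf value: 1
print(t.x.c)  # nested access through the subtree: 'str'
```

The real `FastTreeValue` supports far more (operators, method broadcasting, serialization); this sketch only shows the nesting-as-attributes idea.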
You can put objects of any type (such as `torch.Tensor`) into the tree and call their methods directly, like this:
```python
import torch
from treevalue import FastTreeValue

t = FastTreeValue({
    'a': torch.rand(2, 5),
    'x': {
        'c': torch.rand(3, 4),
    }
})

print(t)
# <FastTreeValue 0x7f8c069346a0>
# ├── a --> tensor([[0.3606, 0.2583, 0.3843, 0.8611, 0.5130],
# │                 [0.0717, 0.1370, 0.1724, 0.7627, 0.7871]])
# └── x --> <FastTreeValue 0x7f8ba6130f40>
#     └── c --> tensor([[0.2320, 0.6050, 0.6844, 0.3609],
#                       [0.0084, 0.0816, 0.8740, 0.3773],
#                       [0.6523, 0.4417, 0.6413, 0.8965]])

print(t.shape)  # property access
# <FastTreeValue 0x7f8c06934ac0>
# ├── a --> torch.Size([2, 5])
# └── x --> <FastTreeValue 0x7f8c069346d0>
#     └── c --> torch.Size([3, 4])

print(t.sin())  # method call
# <FastTreeValue 0x7f8c06934b80>
# ├── a --> tensor([[0.3528, 0.2555, 0.3749, 0.7586, 0.4908],
# │                 [0.0716, 0.1365, 0.1715, 0.6909, 0.7083]])
# └── x --> <FastTreeValue 0x7f8c06934b20>
#     └── c --> tensor([[0.2300, 0.5688, 0.6322, 0.3531],
#                       [0.0084, 0.0816, 0.7669, 0.3684],
#                       [0.6070, 0.4275, 0.5982, 0.7812]])

print(t.reshape((2, -1)))  # method with arguments
# <FastTreeValue 0x7f8c06934b80>
# ├── a --> tensor([[0.3606, 0.2583, 0.3843, 0.8611, 0.5130],
# │                 [0.0717, 0.1370, 0.1724, 0.7627, 0.7871]])
# └── x --> <FastTreeValue 0x7f8c06934b20>
#     └── c --> tensor([[0.2320, 0.6050, 0.6844, 0.3609, 0.0084, 0.0816],
#                       [0.8740, 0.3773, 0.6523, 0.4417, 0.6413, 0.8965]])

print(t[:, 1:-1])  # index operator
# <FastTreeValue 0x7f8ba5c8eca0>
# ├── a --> tensor([[0.2583, 0.3843, 0.8611],
# │                 [0.1370, 0.1724, 0.7627]])
# └── x --> <FastTreeValue 0x7f8ba5c8ebe0>
#     └── c --> tensor([[0.6050, 0.6844],
#                       [0.0816, 0.8740],
#                       [0.4417, 0.6413]])

print(1 + (t - 0.8) ** 2 * 1.5)  # math operators
# <FastTreeValue 0x7fdfa5836b80>
# ├── a --> tensor([[1.6076, 1.0048, 1.0541, 1.3524, 1.0015],
# │                 [1.0413, 1.8352, 1.2328, 1.7904, 1.0088]])
# └── x --> <FastTreeValue 0x7fdfa5836880>
#     └── c --> tensor([[1.1550, 1.0963, 1.3555, 1.2030],
#                       [1.0575, 1.4045, 1.0041, 1.0638],
#                       [1.0782, 1.0037, 1.5075, 1.0658]])
```
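All of the calls above share one behavior: the operation is applied to every leaf while the tree shape is preserved. Conceptually this is a recursive map over nested containers; a simplified standard-library sketch of that idea (not treevalue's actual mechanism, and using plain dicts instead of tree objects):

```python
# Simplified sketch of the broadcasting behind t.sin() or t + 1:
# apply a function to every leaf of a nested dict, keeping the shape.
def tree_map(func, tree):
    if isinstance(tree, dict):
        # Recurse into subtrees, rebuilding the same structure.
        return {key: tree_map(func, value) for key, value in tree.items()}
    # Leaf reached: apply the function.
    return func(tree)


data = {'a': 4, 'x': {'c': 9}}
print(tree_map(lambda v: v ** 2, data))
# {'a': 16, 'x': {'c': 81}}
```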
For more quick-start explanation and further usage, take a look at the documentation.
Speed Performance
Here is the speed performance of the operations in `FastTreeValue`. The following table is the performance comparison with dm-tree:
|  | flatten | flatten(with path) | mapping | mapping(with path) |
|---|---|---|---|---|
| treevalue | --- | 511 ns ± 6.92 ns | 3.16 µs ± 42.8 ns | 1.58 µs ± 30 ns |
|  | flatten | flatten_with_path | map_structure | map_structure_with_path |
| dm-tree | 830 ns ± 8.53 ns | 11.9 µs ± 358 ns | 13.3 µs ± 87.2 ns | 62.9 µs ± 2.26 µs |
The following two tables are the performance comparison with jax pytree.
|  | mapping | mapping(with path) | flatten | unflatten | flatten_values | flatten_keys |
|---|---|---|---|---|---|---|
| treevalue | 2.21 µs ± 32.2 ns | 2.16 µs ± 123 ns | 515 ns ± 7.53 ns | 601 ns ± 5.99 ns | 301 ns ± 12.9 ns | 451 ns ± 17.3 ns |
|  | tree_map | (Not Implemented) | tree_flatten | tree_unflatten | tree_leaves | tree_structure |
| jax pytree | 4.67 µs ± 184 ns | --- | 1.29 µs ± 27.2 ns | 742 ns ± 5.82 ns | 1.29 µs ± 22 ns | 1.27 µs ± 16.5 ns |
|  | flatten + all | flatten + reduce | flatten + reduce(with init) | rise(given structure) | rise(automatic structure) |
|---|---|---|---|---|---|
| treevalue | 425 ns ± 9.33 ns | 702 ns ± 5.93 ns | 793 ns ± 13.4 ns | 9.14 µs ± 129 ns | 11.5 µs ± 182 ns |
|  | tree_all | tree_reduce | tree_reduce(with init) | tree_transpose | (Not Implemented) |
| jax pytree | 1.47 µs ± 37 ns | 1.88 µs ± 27.2 ns | 1.91 µs ± 47.4 ns | 10 µs ± 117 ns | --- |
This is the comparison between dm-tree, jax pytree, and treevalue on the `flatten` and `mapping` operations (a lower value means less time cost, i.e. the implementation runs faster).
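To clarify what the benchmarked `flatten` and `unflatten` operations actually do, here is a standard-library sketch of the two (illustrative only, not the measured implementations): `flatten` turns a tree into a list of (path, value) pairs, and `unflatten` rebuilds the tree from them.

```python
def flatten(tree, prefix=()):
    """Turn a nested dict into a list of (path, value) pairs."""
    pairs = []
    for key, value in tree.items():
        path = prefix + (key,)
        if isinstance(value, dict):
            pairs.extend(flatten(value, path))   # recurse into subtree
        else:
            pairs.append((path, value))          # leaf reached
    return pairs


def unflatten(pairs):
    """Rebuild the nested dict from (path, value) pairs."""
    tree = {}
    for path, value in pairs:
        node = tree
        for key in path[:-1]:
            node = node.setdefault(key, {})      # create subtrees on demand
        node[path[-1]] = value
    return tree


data = {'a': 1, 'x': {'c': 'str', 'd': [1, 2, None]}}
pairs = flatten(data)
print(pairs)
# [(('a',), 1), (('x', 'c'), 'str'), (('x', 'd'), [1, 2, None])]
print(unflatten(pairs) == data)  # round trip: True
```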
The following table is the performance comparison result with tianshou Batch.
|  | get | set | init | deepcopy | stack | cat | split |
|---|---|---|---|---|---|---|---|
| treevalue | 51.6 ns ± 0.609 ns | 64.4 ns ± 0.564 ns | 750 ns ± 14.2 ns | 88.9 µs ± 887 ns | 50.2 µs ± 771 ns | 40.3 µs ± 1.08 µs | 62 µs ± 1.2 µs |
| tianshou Batch | 43.2 ns ± 0.698 ns | 396 ns ± 8.99 ns | 11.1 µs ± 277 ns | 89 µs ± 1.42 µs | 119 µs ± 1.1 µs | 194 µs ± 1.81 µs | 653 µs ± 17.8 µs |
And this is the comparison between tianshou Batch and treevalue on the `cat`, `stack`, and `split` operations (a lower value means less time cost, i.e. the implementation runs faster).
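For intuition about what `stack` means here: several trees of identical structure are combined leaf-by-leaf. A standard-library sketch using plain lists in place of tensors (the hypothetical helper `tree_stack` is for illustration; treevalue and tianshou stack `torch.Tensor` leaves instead):

```python
def tree_stack(trees):
    """Combine trees of identical structure, gathering corresponding
    leaves into lists (real implementations stack tensors instead)."""
    first = trees[0]
    if isinstance(first, dict):
        # Same keys assumed in every tree; recurse per key.
        return {key: tree_stack([t[key] for t in trees]) for key in first}
    # Leaves: collect one value from each tree.
    return list(trees)


a = {'obs': 1, 'info': {'score': 10}}
b = {'obs': 2, 'info': {'score': 20}}
print(tree_stack([a, b]))
# {'obs': [1, 2], 'info': {'score': [10, 20]}}
```

This is the kind of per-leaf batching that reinforcement-learning code (the main use case of tianshou's Batch) performs on every environment step, which is why its speed matters.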
The benchmark code for these tests can be found in the project repository.
Extension
If you need to translate a `treevalue` object into runnable source code, you can use the potc-treevalue plugin, which can be installed with the command below:

```shell
pip install potc-treevalue
```

Or install it together with `treevalue` itself:

```shell
pip install treevalue[potc]
```
With potc, you can translate objects into runnable Python source code, which can afterwards be loaded back into objects by the Python interpreter. For more information, you can refer to the potc project.
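Without relying on potc's exact API, the underlying idea of "object to runnable source" can be sketched with the standard library alone: emit a source string whose evaluation reconstructs an equal object. For plain literals, `repr` already produces such source (potc handles far more cases, including trees; the helper `to_source` below is purely illustrative):

```python
# Concept sketch of "object -> runnable source code": produce a source
# string that evaluates back to an equal object. repr() covers plain
# literals; potc generalizes this to complex objects such as trees.
def to_source(obj):
    source = repr(obj)
    # Sanity check: the emitted source must round-trip through eval.
    assert eval(source) == obj
    return source


data = {'a': 1, 'x': {'c': 'str', 'e': b'bytes'}}
src = to_source(data)
print(src)
print(eval(src) == data)  # the source reconstructs an equal object: True
```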
Contribution
We appreciate all contributions to improve treevalue, both logic and system designs. Please refer to CONTRIBUTING.md for more guides.
Users can also join our Slack communication channel, or contact the core developer HansBug for more detailed discussion.
License
`treevalue` is released under the Apache 2.0 license.