Pretrained BERT models for encoding clinical trial documents to compact embeddings.
Trial2Vec
Findings of EMNLP'22 | Trial2Vec: Zero-Shot Clinical Trial Document Similarity Search using Self-Supervision
Usage
Get a pretrained Trial2Vec model in three lines:

```python
from trial2vec import Trial2Vec

model = Trial2Vec()
model.from_pretrained()
```
How to install

First install a PyTorch build appropriate for your platform by following https://pytorch.org/get-started/locally/. Then install Trial2Vec:

```shell
pip install git+https://github.com/RyanWangZf/Trial2Vec.git
```
Search similar trials

Use Trial2Vec to search for similar clinical trials:

```python
from trial2vec import load_demo_data

# load demo data, which contains trial documents
data = load_demo_data()
test_data = {'x': data['x']}

# make predictions
pred = model.predict(test_data)
```
Encode trials

Use Trial2Vec to encode clinical trial documents:

```python
test_data = {'x': df}  # df is a dataframe of trial documents
emb = model.encode(test_data)  # make inference

# or just look up the pre-encoded trial documents by NCT ID
emb = [model[nct_id] for nct_id in test_data['x']['nct_id']]
```
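Once trials are encoded, similarity search reduces to comparing embedding vectors. The sketch below ranks trials against a query by cosine similarity with plain NumPy; the embeddings and NCT IDs here are small made-up stand-ins — in practice the vectors would come from `model.encode(...)` or `model[nct_id]` as shown above.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical embeddings keyed by NCT ID (real ones come from the model)
emb = {
    "NCT0000001": [0.9, 0.1, 0.0],
    "NCT0000002": [0.8, 0.2, 0.1],
    "NCT0000003": [-0.5, 0.7, 0.4],
}

# rank the other trials by similarity to a query trial
query = emb["NCT0000001"]
ranked = sorted(
    (nct for nct in emb if nct != "NCT0000001"),
    key=lambda nct: cosine_sim(query, emb[nct]),
    reverse=True,
)
print(ranked[0])  # the most similar trial to the query
```

This is essentially what a zero-shot similarity search does under the hood: no task-specific fine-tuning, just nearest neighbors in the embedding space.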
Download files

Source Distribution: Trial2Vec-0.0.1.tar.gz (22.8 kB)

Built Distribution: Trial2Vec-0.0.1-py3-none-any.whl (25.9 kB)

Hashes for Trial2Vec-0.0.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 0a4738db9a862d84d2516977d40550d510c423019c031f211794e1f207809e30
MD5 | 5295b470bc7790c7f868d4a75d3cd7b1
BLAKE2b-256 | 846092137e65aba3cf19e188449705ab90f8c672e6c9d036d7b44bcf8200fc0c