A set of PyTorch modules and utilities for training the code2seq model.
code2seq
A PyTorch implementation of the code2seq model.
Installation
You can install the package via pip:
pip install code2seq
Usage
A minimal code example for training the model:
from os.path import join

import hydra
from code2seq.dataset import PathContextDataModule
from code2seq.model import Code2Seq
from code2seq.utils.vocabulary import Vocabulary
from omegaconf import DictConfig
from pytorch_lightning import Trainer


@hydra.main(config_path="configs")
def train(config: DictConfig):
    vocabulary_path = join(config.data_folder, config.dataset.name, config.vocabulary_name)
    vocabulary = Vocabulary.load_vocabulary(vocabulary_path)

    model = Code2Seq(config, vocabulary)
    data_module = PathContextDataModule(config, vocabulary)

    trainer = Trainer(max_epochs=config.hyper_parameters.n_epochs)
    trainer.fit(model, datamodule=data_module)


if __name__ == "__main__":
    train()
Navigate to code2seq/configs to see example configs. If you have any questions, feel free to open an issue.
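The exact config layout is defined by the files in code2seq/configs, but based on the fields the training snippet above accesses (`data_folder`, `dataset.name`, `vocabulary_name`, `hyper_parameters.n_epochs`), a config might look roughly like this. All values below are illustrative placeholders, not the shipped defaults:

```yaml
# Hypothetical config sketch; key names mirror the attributes read in
# the training example. Values are placeholders, not project defaults.
data_folder: data          # root folder with preprocessed datasets
vocabulary_name: vocabulary.pkl

dataset:
  name: java-small         # subfolder of data_folder with this dataset

hyper_parameters:
  n_epochs: 10             # passed to pytorch_lightning.Trainer(max_epochs=...)
```

Since the entry point uses Hydra, individual values can also be overridden from the command line, e.g. `python train.py hyper_parameters.n_epochs=5`.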