Automatic documentation generator from AutoRA code
Project description
AutoDoc
This project was automatically generated using the LINCC-Frameworks python-project-template. For more information about the project template, see the documentation.
Dev Guide - Getting Started
Before installing any dependencies or writing code, it's a great idea to create a virtual environment. We recommend using conda to manage virtual environments. If you have conda installed locally, you can run the following to create and activate a new environment.
>> conda create -n <env_name> python=3.8
>> conda activate <env_name>
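Once the environment is active, a quick optional check that it picked up the expected interpreter:
>> python --version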
Once you have created a new environment, you can install this project for local development using the following commands:
>> pip install -e .'[dev,train]'
>> pre-commit install
>> conda install pandoc
Notes:
- The single quotes around '[dev]' may not be required for your operating system.
- Look at pyproject.toml for other optional dependencies, e.g. you can do pip install -e ."[dev,train,cuda]" if you want to use CUDA.
- pre-commit install will initialize pre-commit for this local repository, so that a set of tests will be run prior to completing a local commit (see the example below). For more information, see the Python Project Template documentation on pre-commit.
- Installing pandoc allows you to verify that automatic rendering of Jupyter notebooks into documentation for ReadTheDocs works as expected. For more information, see the Python Project Template documentation on Sphinx and Python Notebooks.
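If you want to exercise the hooks without making a commit, pre-commit can also be run manually against the whole repository (standard pre-commit usage, shown here as an optional check):
>> pre-commit run --all-files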
Running AzureML pipelines
This repo contains the evaluation and training pipelines for AutoDoc.
Prerequisites
Add the ML extension:
az extension add --name ml
Configure the CLI:
az login
az account set --subscription "<your subscription name>"
az configure --defaults workspace=<aml workspace> group=<resource group> location=<location, e.g. westus3>
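To confirm the CLI can reach the workspace, you can optionally run a quick check, which relies only on the defaults configured above:
az ml workspace show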
Running jobs
Prediction
az ml job create -f azureml/eval.yml --set display_name="Test prediction job" --set environment_variables.HF_TOKEN=<your huggingface token> --web
Notes:
- --name will set the MLflow run id (see the example below).
- --display_name becomes the name in the experiment dashboard.
- The --web argument will pop up a browser window for tracking the job.
- The HF_TOKEN is required for gated repos, which need authentication.
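For example, to pin both the MLflow run id and the dashboard name explicitly (here <run_id> is just a placeholder, not a required value):
az ml job create -f azureml/eval.yml --name <run_id> --set display_name="Test prediction job" --set environment_variables.HF_TOKEN=<your huggingface token>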
Uploading data
Example:
az storage blob upload --account-name <account> --container <container> --file data/data.jsonl -n data/sweetpea/data.jsonl
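Should you need to retrieve the file again later, the corresponding download follows the same pattern (same placeholders as above):
az storage blob download --account-name <account> --container-name <container> --name data/sweetpea/data.jsonl --file data/data.jsonl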
Download files
Source Distribution: autora_doc-0.0.2.tar.gz (9.0 kB)
Built Distribution: autora_doc-0.0.2-py3-none-any.whl (9.9 kB)
File details
Details for the file autora_doc-0.0.2.tar.gz.
File metadata
- Download URL: autora_doc-0.0.2.tar.gz
- Upload date:
- Size: 9.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/4.0.2 CPython/3.11.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 694c4d8703eefe98e2a3c85a2e8b4e3eb6b0171b4851ae2455a6fb8a81c0a168
MD5 | d3d034563b30ccc2da26bee57a5feac9
BLAKE2b-256 | 20697f6964cae47d59bb51866e4bf4227bb448b84534b7085a1c1b2c6297029e
File details
Details for the file autora_doc-0.0.2-py3-none-any.whl.
File metadata
- Download URL: autora_doc-0.0.2-py3-none-any.whl
- Upload date:
- Size: 9.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/4.0.2 CPython/3.11.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7f5167faea25a7cf0dd98ec36e36201992c8a230fe8204bc39af79fe4b0076cd
MD5 | 33c7f93739a12682980e0a14fe40259b
BLAKE2b-256 | f59c47bdec70139895bb834b67b482d4a71ace9eb60419841e01e8f72f9a4ba2