sparc.client
NIH SPARC Python Client
Architecture details
sparc.client stores its configuration in the config.ini file.
The modules of sparc.client are defined in the services/ directory and must be derived from the BaseService class (services/_default.py). This means that they need to implement the functions defined in the interface: init, connect(), info(), get_profile(), set_profile(), and close(). Apart from these functions, each module in services/ may define its own methods (e.g. list_datasets() in services/pennsieve.py).
config.ini
The configuration file has the following format:

```ini
[global]
default_profile=ci

[prod]
pennsieve_profile_name=prod

[dev]
pennsieve_profile_name=test

[ci]
pennsieve_profile_name=ci
```
The [global] section defines the default profile to use: its default_profile value names the bracketed section whose configuration variables should be read, in this case the [ci] section.
Within each section, different configuration variables can be defined. Currently the only variable that needs to be defined is pennsieve_profile_name, which is passed to the Pennsieve2 library.
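The profile-resolution logic described above can be sketched with the standard library's configparser module (a sketch of the mechanism; sparc.client's actual loading code may differ):

```python
import configparser

# Parse a configuration mirroring the config.ini layout shown above.
config = configparser.ConfigParser()
config.read_string("""
[global]
default_profile=ci

[prod]
pennsieve_profile_name=prod

[dev]
pennsieve_profile_name=test

[ci]
pennsieve_profile_name=ci
""")

# [global] names the section whose variables should be used.
profile = config["global"]["default_profile"]

# Each profile section holds module-specific variables; here, the
# Pennsieve profile name that is passed on to the Pennsieve2 library.
pennsieve_profile = config[profile]["pennsieve_profile_name"]
print(profile, pennsieve_profile)
```

In a real run, `config.read("config/config.ini")` would replace `read_string`.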
Module automatic import
Each Python file in the services/ folder that defines a class derived from BaseService is imported as a module into the SparcClient class.
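The discovery step can be sketched as follows (hypothetical code; `BaseService` here is a stand-in class and the real SparcClient may implement discovery differently):

```python
import inspect
import types

class BaseService:
    """Stand-in for sparc.client's BaseService (assumption)."""

def collect_services(modules):
    """Return the BaseService subclasses found in each module,
    keyed by module name - a simplified view of how SparcClient
    picks up service classes from files in services/."""
    found = {}
    for module in modules:
        for _, cls in inspect.getmembers(module, inspect.isclass):
            if issubclass(cls, BaseService) and cls is not BaseService:
                found[module.__name__] = cls
    return found

# Build a fake "services/pennsieve.py" module for demonstration.
pennsieve = types.ModuleType("pennsieve")
pennsieve.PennsieveService = type("PennsieveService", (BaseService,), {})

services = collect_services([pennsieve])
print(sorted(services))
```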
For example, the Pennsieve module can be used in the following way:

```python
from sparc.client import SparcClient

client = SparcClient(connect=False, config_file='config/config.ini')

# Run module prerequisites, e.g. start the Pennsieve agent in the
# background (in Jupyter: !pennsieve agent)

# Connect to the Pennsieve module and get the Pennsieve Agent object
client.pennsieve.connect()

# Execute internal functions of the module
client.pennsieve.info()

# Alternatively, connect all the available services at once
client.connect()
```
Test generation - PyTest
Good resources for implementing tests can be found on Medium.
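A service test might look like the following (a hypothetical example using a stand-in service class; the project's actual tests live in its test suite and exercise the real modules):

```python
class FakeService:
    """Minimal stand-in for a sparc.client service (assumption)."""

    def __init__(self, connect=False):
        self.connected = connect

    def connect(self):
        self.connected = True
        return self

    def info(self):
        return "connected" if self.connected else "disconnected"

def test_connect_changes_info():
    # A pytest test: plain function, plain asserts.
    service = FakeService(connect=False)
    assert service.info() == "disconnected"
    service.connect()
    assert service.info() == "connected"
```

Running `pytest` discovers any `test_*` function in files named `test_*.py` automatically.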
Documentation - Sphinx tutorial
A fresh start for creating documentation with Sphinx can be found at towardsdatascience. To reproduce the steps:
- Create a docs folder.
- Run sphinx-quickstart in the docs folder and fill in the required prompts.
- Edit the conf.py and index.rst files to adjust them to your needs.
- Run sphinx-apidoc -o . ../src in the docs folder.
- Disregard modules.rst and sphinx.rst; attach sparc.client to the toctree in index.rst.
- Run make html in the docs folder.
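After running sphinx-apidoc, the toctree in index.rst might look like this (a sketch; match the entry names to the files the tool actually generated):

```rst
.. toctree::
   :maxdepth: 2
   :caption: Contents:

   sparc.client
```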
Contribution Guide
- Define configuration variables in the config.ini file (e.g. api_key, api_secret, etc.)
- Create a file in services/
- Create a class within this file that extends BaseService
- The class needs to define all the required functions and may add its own.
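A new service module might therefore look like the following skeleton (hypothetical names throughout; the authoritative interface is the BaseService class in services/_default.py, which is only approximated here):

```python
from abc import ABC, abstractmethod

class BaseService(ABC):
    """Simplified stand-in for the interface in services/_default.py."""

    @abstractmethod
    def connect(self): ...
    @abstractmethod
    def info(self): ...
    @abstractmethod
    def get_profile(self): ...
    @abstractmethod
    def set_profile(self, profile): ...
    @abstractmethod
    def close(self): ...

class MyService(BaseService):
    """Example service reading its variables from a config section."""

    def __init__(self, config=None, connect=False):
        self.config = config or {}
        self.profile = self.config.get("profile_name", "default")

    def connect(self):
        # Establish a session with the backing API here.
        return self

    def info(self):
        return "MyService (hypothetical example service)"

    def get_profile(self):
        return self.profile

    def set_profile(self, profile):
        self.profile = profile
        return self.profile

    def close(self):
        pass

    # Services may add their own methods, like list_datasets()
    # in services/pennsieve.py.
    def list_items(self):
        return []
```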
Developer Setup
- Run pip install -e '.[test]' to install the dependencies needed for a development environment.
- Run pytest --cov=./src to run the tests and get a test coverage summary.
- Run pytest --cov-report html --cov=./src to run the tests and get a full HTML coverage report output to htmlcov.
- Run python -m build to check if your package builds successfully.
This process is currently automated using GitHub Actions in CI.yml.
Software releasing guidelines
The process of releasing a new version of the software is fully automated. This means that CHANGELOG.md as well as the release commands are generated automatically. Versioning is fully dynamic and based on git tags.
Please note that there is no package version in pyproject.toml; we use the dynamic versioning provided by setuptools_scm. Also, the file sparc.client/_version.py should not be committed to the repository.
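A dynamic-versioning setup with setuptools_scm typically looks like this in pyproject.toml (a sketch of the general pattern; the project's actual file may differ in paths and pins):

```toml
[build-system]
requires = ["setuptools>=64", "setuptools_scm>=8"]
build-backend = "setuptools.build_meta"

[project]
name = "sparc.client"
# No static version here; it is derived from git tags.
dynamic = ["version"]

[tool.setuptools_scm]
# Written at build time; this file should not be committed.
version_file = "src/sparc/client/_version.py"
```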
What should commits look like?
We use Semantic Commit Messages for commits. This means that important commits should start with one of the following prefixes: chore:, docs:, feat:, fix:, refactor:, style:, or test:.
Additionally, please refer to the relevant GitHub issue by adding a reference to its number, e.g. #24234.
Releasing a new version
- To release a new version, the 'Create new release' action needs to be launched from the GitHub Actions menu. Navigate to 'Actions', click 'Create new release' on the left-hand side, then click 'Run workflow' on the right-hand side.
- After launching the workflow, specify a version manually. The version needs to start with 'v', e.g. 'v0.0.34'.
- Launching the workflow checks the user's permissions (the user needs to be an admin of the repository) and runs CI in order to verify the integrity of the software.
- If the CI/CD tests pass, a temporary tag is created. The commits that follow the semantic versioning naming convention are then used to create and update CHANGELOG.md.
- Once CHANGELOG.md is pushed to the main branch, the new version is tagged again and the software is released to GitHub and PyPI.