Secret Santa randomizer
This repository implements a basic Python version of a Secret Santa utility. It is meant to serve as a tutorial for beginners interested in Python package development. Each section below mentions typical tools and utilities in a natural order of developing Python packages.
Table of Contents

- Virtual environments
- Project requirements
- PyCharm file types
- Type hints
- Property testing
- Mocks in unit tests
- Usage / Jupyter notebook
- Continuous Integration
We assume PyCharm on Ubuntu >= 16.04 as the development environment.
In PyCharm, check out this repository into a new project, e.g. under `VCS > Checkout from Version Control`.
Shell commands below should be entered in the Terminal pane of PyCharm.
There is no shortcut in PyCharm to send code from the editor to the terminal, so you need to copy-paste commands instead.
We'll use a virtual environment to keep things neat and tidy.
If you've never used virtual environments before, it is worth reading an introduction to them first.
Install support for virtual environments with Python 3.x if you don't have it yet:

```shell
sudo apt-get install python3-venv
```
Configure the PyCharm project with a Python 3 virtual environment under `File > Settings > Project > Project interpreter`. Click on the top-right gear icon, select `Add...`, then create a new `Virtualenv Environment`, using `venv` inside the project root as location and Python 3.x as interpreter. Also un-tick all checkboxes.
We do not use `pipenv` here. You may, however, use it to create a new environment in the same way.
With these settings, anything you execute within the PyCharm project, either at the Terminal or in the Python Console, will run in the virtual environment. Close and re-open PyCharm to make sure the settings are picked up.
Note that you can still temporarily leave the virtual environment from an active Terminal using `deactivate`, and re-activate it using `source venv/bin/activate`.
You can also switch to a different project interpreter in PyCharm (Ctrl + Shift + A, search for `Switch Project Interpreter`). Open terminals and Python consoles then need to be restarted for the environment to match the project interpreter.
The project includes `.in` requirements files such as `requirements-package.in`, defining module / package dependencies. Such files are compiled into an actual requirements file, which is not committed to Git and should be re-created for the local checkout.
Important: make sure all commands are executed inside the virtual environment, i.e. at a prompt like:

```shell
#> (venv) localuser@Ubuntu:~/PyCharm/secretsanta$
```
Check version of Python, upgrade pip and check its version:

```shell
python --version
#> Python 3.6.7
pip install --upgrade pip
#> ...
pip --version
#> pip 19.1.1 from /home/localuser/PyCharm/secretsanta/venv/lib/python3.5/site-packages/pip (python 3.6)
```
```shell
pip install pip-tools
```
List installed modules:

```shell
pip list
#> Package       Version
#> ------------- -------
#> Click         7.0
#> pip           18.1
#> pip-tools     3.1.0
#> pkg-resources 0.0.0
#> setuptools    20.7.0
#> six           1.11.0
```
Install the dependencies defined in the requirements files with `pip-sync`.

Alternatively, you can right-click on the `secretsanta` project folder in the Project explorer and click `Synchronize 'secretsanta'` to refresh and see the generated file.

Now you're ready to go. Should there be any update to the requirements files, make sure you re-execute `pip-compile` and `pip-sync`. If you change the virtual environment you work with, you should instead run `pip-compile -U` (then `pip-sync`) to make sure that compatible versions of your dependencies are used in the new environment.
There are multiple ways to define and execute tests. Two of the most common ones are doctests and unit tests.

The `doctest` module allows running code examples / tests that are defined as part of docstrings. Use the following command to see this in action. The `-v` flag gives verbose output; otherwise, if everything is fine, we would not see any output.
```shell
python -m doctest secretsanta/main/core.py -v
```
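To illustrate, here is a minimal sketch of a function carrying a doctest in its docstring (the function name and behavior are hypothetical, not taken from `core.py`):

```python
import doctest

def pair_up(givers):
    """Rotate the list so that each person gives to the next one.

    >>> pair_up(["Alice", "Bob", "Carol"])
    [('Alice', 'Bob'), ('Bob', 'Carol'), ('Carol', 'Alice')]
    """
    return [(giver, givers[(i + 1) % len(givers)]) for i, giver in enumerate(givers)]

if __name__ == "__main__":
    # Same effect as `python -m doctest <file>`: run the docstring examples.
    doctest.testmod()
```

Running the module through `doctest` executes the example from the docstring and compares the actual output with the expected one.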
It is possible to run code style checks with `flake8` (run `flake8` at the project root). If all is fine, you will not see any output.
Unit tests are kept under `tests` and make use of the `pytest` framework. Run them using `tox`.
The `tox.ini` file contains the following configurations:

- `flake8` (checks code style and reports potential issues)
- `pytest` (which is used as a test runner)
- `pytest-cov` (measures and reports test coverage, see also `.coveragerc`)
- `tox` (where the Python versions to test with are defined)

If you run `tox` outside of the virtual environment, it can run tests for multiple Python versions; this is configured via `envlist`. The tests will only be run for any Python version that is available in the environment where you run them (see the `skip_missing_interpreters` configuration key).
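For orientation, a minimal sketch of what such a `tox.ini` could look like (values here are illustrative; the actual file in the repository is authoritative):

```ini
[tox]
envlist = py35, py36
skip_missing_interpreters = true

[testenv]
deps = -rrequirements.txt
commands = pytest --cov

[flake8]
max-line-length = 120
```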
PyCharm file types
In PyCharm, you can associate files with a certain type under `File > Settings > Editor > File Types`. E.g. use this to get `.coveragerc` marked up as INI (you can do this after installing the `.ini` support PyCharm plugin).
Alternatively, you can register the `.coveragerc` pattern to the existing Buildout Config file type.
Type hints define what type function arguments and return values should be. They serve both as documentation and as a way to identify bugs more easily through static checking; see also PEP 484.
In order to use them, install mypy (outside of a virtual environment):

```shell
sudo apt install mypy
```
Then run `mypy` on the `.py` file(s) whose type hints you want to check; if they are correct, there is no output.
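As a small illustration (a hypothetical function, not part of this package), annotations let mypy check call sites statically:

```python
from typing import Dict, List

def make_pairs(names: List[str]) -> Dict[str, str]:
    """Map each name to the next one in the list, wrapping around."""
    return {name: names[(i + 1) % len(names)] for i, name in enumerate(names)}

pairs = make_pairs(["Alice", "Bob"])  # OK
# make_pairs("Alice")                 # mypy would flag this: str is not List[str]
```

At runtime the hints are not enforced; only tools like mypy act on them.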
We use Hypothesis to define a property test for our matching function: generated example inputs are tested against desired properties. Hypothesis' generators can be configured to produce typical data structures, filled with various instances of primitive types. This is done by composing specific strategies.
- The decorator `@given(...)` must be present before the test function that shall use generated input.
- Generated arguments are defined in a comma-separated list, and will be passed to the test function in order:

```python
from hypothesis import given
from hypothesis.strategies import text, integers

@given(text(), integers())
def test_some_thing(a_string, an_int):
    return
```
- Generation can be controlled by various optional parameters, e.g. `text(min_size=2)` for testing with strings that have at least 2 characters.
Mocks in unit tests
Mock objects are used to avoid external side effects. We use the standard Python package `unittest.mock`. It provides the `@patch` decorator, which allows us to specify classes to be mocked within the scope of a given test case. See `test_funs.py` and `test_core.py` for examples.
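As a sketch (the class and function names here are hypothetical, not from the repository), this is what patching a side-effecting collaborator looks like:

```python
import unittest
from unittest.mock import patch

class Mailer:
    """Hypothetical helper with an external side effect."""
    def send(self, recipient, message):
        raise RuntimeError("would send a real email")

def notify(mailer, name):
    mailer.send(name, "You drew a name!")
    return "notified " + name

class NotifyTest(unittest.TestCase):
    # Mailer.send is replaced with a mock only within the scope of this test.
    @patch.object(Mailer, "send", return_value=None)
    def test_notify(self, mock_send):
        self.assertEqual(notify(Mailer(), "Alice"), "notified Alice")
        mock_send.assert_called_once_with("Alice", "You drew a name!")
```

The mock records its calls, so the test can assert both the return value of `notify` and how the collaborator was used, without any email being sent.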
Documentation is done using Sphinx.
Prerequisite: installation. Open a terminal (outside of a virtual environment) and run the command below:

```shell
sudo apt-get install python3-sphinx
```
Check the installation (and version) with `sphinx-build --version`.
Initializing the documentation (already done here; listed for reference) is done with `sphinx-quickstart`, which leads through an interactive generation process.
Suggested values / options are listed here. Hitting enter without typing anything will take the suggested default shown inside square brackets [ ].
- Root path for the documentation: docs
- Separate source and build directories: y
- Source file suffix: .rst
- Sphinx extensions: autodoc, doctest, intersphinx, coverage, mathjax, viewcode
- Create Makefile: y
In order to use `autodoc`, one needs to uncomment the corresponding line in `conf.py` and set the appropriate path to the directory containing the modules to be documented. For Sphinx/autodoc to work, the docstrings must be written in correct reStructuredText; see the Sphinx documentation for details.
You should be inside the documentation root directory.
Using the Makefile:

```shell
cd docs
make html
```
You can view the documentation by opening the generated `index.html` (under `docs/build/html`) in your browser of choice. Previewing the `.rst` files does not work properly in PyCharm, apparently because it only supports a subset of Sphinx.
Alternative build without Makefile:

```shell
sphinx-build -b html <sourcedir> <builddir>
```
The Jupyter notebook `SecretSanta.ipynb` illustrates the usage of the package. It can be run in your browser (or directly in PyCharm if you have the Professional edition):

```shell
jupyter notebook SecretSanta.ipynb
```
The command `jupyter --paths` gives you some useful information about the location of Jupyter-related directories, e.g. configuration.
Continuous Integration (CI) aims to keep state updated to always match the code currently checked in a repository. This typically includes a build, automated test runs, and possibly making sure that the newly built artifacts are deployed to a target environment. This helps developers and users by providing timely feedback and showing what the results of certain checks were on a given version of the code.
This repository uses Travis CI to run tests automatically when new commits are pushed; results can be viewed on the Travis CI site. Along with test results, coverage information is generated and uploaded to codecov, which generates a report out of it.
Travis CI is configured using the `.travis.yml` file. This allows specifying the environment(s) to run tests in; tests will be run for each specified environment. The steps required before running tests are specified under `install`. Finally, the task to run is defined in `script`, and we make sure coverage reports are uploaded (see `after_success`). A notification about completed builds is sent to our Slack channel using a secure notification hook.
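For orientation only, a `.travis.yml` using these keys might look roughly like this (the actual file in the repository is authoritative):

```yaml
language: python
python:
  - "3.6"
install:
  - pip install tox
script:
  - tox
after_success:
  - codecov
```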
Codecov is configured in `codecov.yml`, defining the coverage value range (in percent) to match to a color scale, as well as the coverage checks to be performed and their success criteria. See codecov's general configuration and commit status evaluation documentation for more information.
Notifications from codecov can only be delivered via unencrypted webhook URLs. In order to avoid exposing such hooks in a public repository, we do not use this functionality here.
- `MANIFEST.in` specifies extra files that shall be included in a source distribution.
- Badges: This README features various badges (at the beginning), including a build status badge and a code coverage status badge.