
View your git repository as a graph

Project description

create conda environment

conda create -n git
source activate git   (conda activate git on Windows)
conda install --file requirements.txt

run python module in interpreter

cd git-graph
python

import dot_graph as dg
dg.DotGraph('..').persist(show=False)
dg.DotGraph('../examples/demo', nodes='btc').persist(form='svg', show=False)

run python program

python git-graph/dot_graph.py
python git-graph/dot_graph.py -p examples/demo -n btc -f svg

run python program with shebang

./git-graph/dot_graph.py
./git-graph/dot_graph.py -p examples/demo -n btc -f svg

run python program with link in PATH

ln -s ~/workspace/git-graph/git-graph/dot_graph.py /home/hduche/conda/envs/git/bin/gg
cd examples/demo
gg
gg -p examples/demo -n btc -f svg

run as git plugin

ln -s ~/workspace/git-graph/git-graph/dot_graph.py /home/hduche/conda/envs/git/bin/git-graph
git graph
git graph -p examples/demo -n btc -f svg

Learning Git can seem daunting because of its impressive number of commands. However, Git is so wonderfully lightweight that checking the impact of a command on a test repository is very good practice. Git-graph displays the content of your repository as a graph, so you can see that impact at a glance.

View your Git repository as a Directed Acyclic Graph (DAG)

  • Git is the most famous version control system.
  • It exposes its implementation details in the .git folder.
  • Learning Git can be quite difficult.

Git-graph displays your Git repository as a Directed Acyclic Graph (DAG).

It offers several ways to run:

  • on the command line, as a standalone program or as a git graph plugin
  • as a Python module from the interpreter

Look at the effects of Git commands on your repository.

Git stores its internal data as a Directed Acyclic Graph (DAG), and offers a bunch of powerful commands. As Git is very lightweight, it is really easy to experiment with the effects of the different commands. Being able to display Git's inner DAG after each command considerably flattens the learning curve.
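The DAG structure can be illustrated with a few lines of Python. This is a toy model, not git-graph's actual code: each commit records its parent commit(s), and walking parent links yields a commit's history.

```python
# A toy model of the idea above (not git-graph's actual code): a Git history
# is a DAG in which every commit records its parent commit(s).
commits = {
    "a1": [],            # root commit: no parent
    "b2": ["a1"],
    "c3": ["a1"],        # a branch created from a1
    "d4": ["b2", "c3"],  # a merge commit has two parents
}

def ancestors(commit, dag):
    """Return every commit reachable from `commit` by following parent links."""
    seen, stack = set(), [commit]
    while stack:
        for parent in dag[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# The merge commit d4 reaches both branches and the root:
print(sorted(ancestors("d4", commits)))  # ['a1', 'b2', 'c3']
```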

This program displays Git's inner DAG with a simple command. [demo image]


Have you always wished Jupyter notebooks were plain text documents? Wished you could edit them in your favorite IDE? And get clear and meaningful diffs when doing version control? Then... Jupytext may well be the tool you're looking for!

Jupytext can save Jupyter notebooks as

  • Markdown and R Markdown documents,
  • Julia, Python, R, Bash, Scheme and C++ scripts.

There are multiple ways to use jupytext:

  • on the command line
  • through jupytext.vim, a plugin for Vim that lets you edit Jupyter notebooks represented as markdown documents or Python scripts
  • directly from Jupyter Notebook or Jupyter Lab. Jupytext provides a contents manager that allows Jupyter to save your notebook to your favorite format (.py, .R, .jl, .md, .Rmd...) in addition to (or in place of) the traditional .ipynb file. The text representation can be edited in your favorite editor. When you're done, refresh the notebook in Jupyter: input cells are loaded from the text file, while output cells are reloaded from the .ipynb file if present. Refreshing preserves kernel variables, so you can resume your work in the notebook and run the modified cells without having to rerun the notebook in full.

Demo time

See Introducing Jupytext, the PyParis talk, or the Binder demo.

Looking for a demo?

Example usage

Writing notebooks as plain text

You like to work with scripts? The good news is that plain scripts, which you can draft and test in your favorite IDE, open transparently as notebooks in Jupyter when using Jupytext. Run the notebook in Jupyter to generate the outputs, associate an .ipynb representation, save and share your research as either a plain script or as a traditional Jupyter notebook with outputs.

Collaborating on Jupyter Notebooks

With Jupytext, collaborating on Jupyter notebooks with Git becomes as easy as collaborating on text files.

The setup is straightforward:

  • Open your favorite notebook in Jupyter notebook
  • Associate a .py representation (for instance) to that notebook
  • Save the notebook, and put the Python script under Git control. Sharing the .ipynb file is possible, but not required.

Collaborating then works as follows:

  • Your collaborator pulls your script. The script opens as a notebook in Jupyter, with no outputs.
  • They run the notebook and save it. Outputs are regenerated, and a local .ipynb file is created.
  • They change the notebook, and push their updated script. The diff is nothing else than a standard diff on a Python script.
  • You pull the changed script, and refresh your browser. Input cells are updated. The outputs from cells that were changed are removed. Your variables are untouched, so you have the option to run only the modified cells to get the new outputs.

Code refactoring

In the animation below we propose a quick demo of Jupytext. While the example remains simple, it shows how your favorite text editor or IDE can be used to edit your Jupyter notebooks. IDEs are more convenient than Jupyter for navigating through code, editing and executing cells or fractions of cells, and debugging.

  • We start with a Jupyter notebook.
  • The notebook includes a plot of the world population. The plot legend is not in order of decreasing population, we'll fix this.
  • We want the notebook to be saved as both a .ipynb and a .py file: we add a "jupytext": {"formats": "ipynb,py"}, entry to the notebook metadata.
  • The Python script can be opened with PyCharm:
    • Navigating in the code and documentation is easier than in Jupyter.
    • The console is convenient for quick tests. We don't need to create cells for this.
    • We find out that the columns of the data frame were not in the correct order. We update the corresponding cell, and get the correct plot.
  • The Jupyter notebook is refreshed in the browser. Modified inputs are loaded from the Python script. Outputs and variables are preserved. We finally rerun the code and get the correct plot.

Installation


Jupytext is available on PyPI and on conda-forge. Run either of

pip install jupytext --upgrade

or

conda install -c conda-forge jupytext

Then, configure Jupyter to use Jupytext:

  • generate a Jupyter config, if you don't have one yet, with jupyter notebook --generate-config
  • edit .jupyter/jupyter_notebook_config.py and append the following:
c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"

(note that our contents manager accepts a few options: default formats, default metadata filter, etc — read more on this below).

  • and restart Jupyter, i.e. run
jupyter notebook

Per-notebook configuration

Configure the multiple export formats for the current notebook by adding a "jupytext": {"formats": "ipynb,py"}, entry to the notebook metadata with Edit/Edit Notebook Metadata in Jupyter's menu:

{
  "jupytext": {"formats": "ipynb,py"},
  "kernelspec": {
    (...)
  },
  "language_info": {
    (...)
  }
}

Accepted formats are composed of an extension, like ipynb, md, Rmd, jl, py, R, sh, cpp... and an optional format name among light (default for Julia, Python), percent, sphinx, spin (default for R) — see below for the format specifications. Use ipynb,py:percent if you want to pair the .ipynb notebook with a .py script in the percent format. To have the script extension chosen according to the Jupyter kernel, use the auto extension.

Jupytext accepts a few additional options:

  • comment_magics: By default, Jupyter magics are commented when notebooks are exported to any other format than markdown. If you prefer otherwise, use this boolean option, or its global counterpart (see below).
  • metadata_filter.notebook: By default, Jupytext only exports the kernelspec and jupytext metadata to the text files. Set "jupytext": {"metadata_filter": {"notebook": "-all"}} if you want the script to have no notebook metadata at all. The value for metadata_filter.notebook is a comma-separated list of additional/excluded (negated) entries, with all as a keyword that matches all entries (so -all excludes everything).
  • metadata_filter.cells: By default, cell metadata autoscroll, collapsed, scrolled, trusted and ExecuteTime are not included in the text representation. Add or exclude more cell metadata with this option.
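The filter semantics described above can be sketched in a few lines of Python. This is an illustrative reimplementation of the behaviour as documented, not Jupytext's actual code:

```python
# Illustrative reimplementation of the filter semantics described above
# (an assumption about the behaviour, not Jupytext's actual code).
def apply_filter(metadata, filter_spec, default_keys):
    """Keep metadata entries selected by a comma-separated filter string:
    plain entries are added to the defaults, '-' entries are excluded,
    and 'all' selects (or, when negated, removes) every entry."""
    entries = [e.strip() for e in filter_spec.split(",") if e.strip()]
    excluded = {e[1:] for e in entries if e.startswith("-")}
    added = {e for e in entries if not e.startswith("-")}
    if "all" in added:
        keep = set(metadata) - excluded
    elif "all" in excluded:
        keep = added & set(metadata)
    else:
        keep = (set(default_keys) | added) - excluded
    return {k: v for k, v in metadata.items() if k in keep}

nb_metadata = {"kernelspec": {}, "jupytext": {}, "widgets": {}, "varInspector": {}}
print(apply_filter(nb_metadata, "all,-widgets,-varInspector", ["kernelspec", "jupytext"]))
# {'kernelspec': {}, 'jupytext': {}}
```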

Global configuration

Jupytext's contents manager also accepts global options. We start with the default format pairing. Say you want to always associate every .ipynb notebook with a .md file (and reciprocally). This is simply done by adding the following to your Jupyter configuration file:

# Always pair ipynb notebooks to md files
c.ContentsManager.default_jupytext_formats = "ipynb,md"

(and similarly for the other formats).

In case the percent format is your favorite, add the following to your .jupyter/jupyter_notebook_config.py file:

# Use the percent format when saving as py
c.ContentsManager.preferred_jupytext_formats_save = "py:percent"

and then, Jupytext will understand "jupytext": {"formats": "ipynb,py"}, as an instruction to create the paired Python script in the percent format.

Default metadata filtering

You can specify which metadata to include or exclude in the text files created by Jupytext by default by setting c.ContentsManager.default_notebook_metadata_filter (notebook metadata) and c.ContentsManager.default_cell_metadata_filter (cell metadata). They accept a string of comma-separated keywords. A minus sign - in front of a keyword means exclusion.

Suppose you want to keep all the notebook metadata but widgets and varInspector in the YAML header. For cell metadata, you want to allow ExecuteTime and autoscroll, but not hide_output. You can set

c.ContentsManager.default_notebook_metadata_filter = "all,-widgets,-varInspector"
c.ContentsManager.default_cell_metadata_filter = "ExecuteTime,autoscroll,-hide_output"

If you want the text files created by Jupytext to have no metadata, you may use the global metadata filters below. Please note that with this setting, the metadata is only preserved in the .ipynb file — be sure to open that file in Jupyter, and not the text file, which will miss the pairing information.

c.ContentsManager.default_notebook_metadata_filter = "-all"
c.ContentsManager.default_cell_metadata_filter = "-all"

Finally, if you want Jupytext to export no metadata other than what is already present in pre-existing scripts or Markdown files, use:

# Do not add new metadata when editing a markdown document or a script
c.ContentsManager.freeze_metadata = True

NB: All these global options (and more) are documented here.

Command line conversion

The package provides a jupytext script for command line conversion between the various notebook extensions:

jupytext --to python notebook.ipynb             # create a notebook.py file
jupytext --to py:percent notebook.ipynb         # create a notebook.py file in the double percent format
jupytext --to py:percent --comment-magics false notebook.ipynb   # create a notebook.py file in the double percent format, and do not comment magic commands
jupytext --to markdown notebook.ipynb           # create a notebook.md file
jupytext --output script.py notebook.ipynb      # create a script.py file

jupytext --to notebook notebook.py              # overwrite notebook.ipynb (remove outputs)
jupytext --to notebook --update notebook.py     # update notebook.ipynb (preserve outputs)
jupytext --to ipynb notebook1.md notebook2.py   # overwrite notebook1.ipynb and notebook2.ipynb

jupytext --to md --test notebook.ipynb          # Test round trip conversion

jupytext --to md --output - notebook.ipynb      # display the markdown version on screen
jupytext --from ipynb --to py:percent           # read ipynb from stdin and write double percent script on stdout

Jupytext is also available as a Git pre-commit hook. Use this if you want Jupytext to create and update the .py (or .md...) representation of the staged .ipynb notebooks. All you need is to create an executable .git/hooks/pre-commit file with the following content:

#!/bin/sh
jupytext --to py:light --pre-commit

If you don't want notebooks to be committed (and only commit the representations), you can ask the pre-commit hook to unstage notebooks after conversion by adding the following line:

git reset HEAD **/*.ipynb

Jupytext does not offer a merge driver. If a conflict occurs, solve it on the text representation and then update or recreate the .ipynb notebook. Or give nbdime and its merge driver a try.

Reading notebooks in Python

Manipulate notebooks in a Python shell or script using jupytext's main functions:

# Read notebook from file, given format name (guess format when `format_name` is None)
readf(nb_file, format_name=None)

# Read notebook from text, given extension and format name
reads(text, ext, format_name=None, [...])

# Return the text representation for the notebook, given extension and format name
writes(notebook, ext, format_name=None, [...])

# Write notebook to file in desired format
writef(notebook, nb_file, format_name=None)

Round-trip conversion

Representing Jupyter notebooks as scripts requires a solid round trip conversion. You don't want your notebooks (nor your scripts) to be modified because you are converting them to the other form. A few hundred tests ensure that round trip conversion is safe.

You can easily test that the round trip conversion preserves your Jupyter notebooks and scripts. Run for instance:

# Test the ipynb -> py:percent -> ipynb round trip conversion
jupytext --test notebook.ipynb --to py:percent

# Test the ipynb -> (py:percent + ipynb) -> ipynb (à la paired notebook) conversion
jupytext --test --update notebook.ipynb --to py:percent

Note that jupytext --test compares the resulting notebooks according to its expectations. If you wish to proceed to a strict comparison of the two notebooks, use jupytext --test-strict, and use the flag -x to report with more details on the first difference, if any.

Please note that

  • When you associate a Jupyter kernel with your text notebook, that information goes to a YAML header at the top of your script or Markdown document. And Jupytext itself may create a jupytext entry in the notebook metadata. Have a look at the freeze_metadata option if you want to avoid this.
  • Cell metadata are available in light and percent formats for all cell types. Sphinx Gallery scripts in sphinx format do not support cell metadata. R Markdown and R scripts in spin format support cell metadata for code cells only. Markdown documents do not support cell metadata.
  • By default, a few cell metadata are not included in the text representation of the notebook. And only the most standard notebook metadata are exported. Learn more on this in the sections for notebook specific and global settings for metadata filtering.
  • Representing a Jupyter notebook as a Markdown or R Markdown document has the effect of splitting markdown cells with two consecutive blank lines into multiple cells (as the two blank line pattern is used to separate cells).

Format specifications

Markdown and R Markdown

Our implementation for Jupyter notebooks as Markdown or R Markdown documents is straightforward:

  • A YAML header contains the notebook metadata (Jupyter kernel, etc)
  • Markdown cells are inserted verbatim, and separated with two blank lines
  • Code and raw cells start with triple backticks collated with cell language, and end with triple backticks. Cell metadata are not available in the Markdown format. The code cell options in the R Markdown format are mapped to the corresponding Jupyter cell metadata options, when available.
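The mapping described above can be sketched in a few lines. This is only an illustration of the rules, not Jupytext's actual writer:

```python
# Minimal sketch of the mapping described above (an illustration, not
# Jupytext's actual writer).
FENCE = "`" * 3  # a triple backtick, built here so this sample stays readable

def cells_to_markdown(cells, language="python"):
    """Render (cell_type, source) pairs as a Markdown document: markdown
    cells verbatim, code cells fenced with the cell language, and cells
    separated by two blank lines."""
    chunks = []
    for cell_type, source in cells:
        if cell_type == "markdown":
            chunks.append(source)
        else:
            chunks.append(f"{FENCE}{language}\n{source}\n{FENCE}")
    return "\n\n\n".join(chunks)  # two blank lines between cells

doc = cells_to_markdown([
    ("markdown", "# World population"),
    ("code", "import pandas as pd"),
])
print(doc)
```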

See how our World population.ipynb notebook in the demo folder is represented in Markdown or R Markdown.

The light format for notebooks as scripts

The light format was created for this project. It is the default format for Python and Julia scripts. This format can read any script as a Jupyter notebook, even scripts which were never prepared to become a notebook. When a notebook is written as a script using this format, only a few cell markers are introduced, and none if possible.

The light format has:

  • A (commented) YAML header, that contains the notebook metadata.
  • Markdown cells are commented, and separated with a blank line.
  • Code cells are exported verbatim (except for Jupyter magics, which are commented), and separated with blank lines. Code cells are reconstructed from consistent Python paragraphs (no function, class or multiline comment will be broken).
  • Cells that contain more than one Python paragraph need an explicit start-of-cell delimiter # + (// + in C++, etc). Cells that have explicit metadata have a cell header # + {JSON} where the metadata is represented in JSON format. The end-of-cell delimiter is # -, and is omitted when followed by another explicit start-of-cell marker.
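The paragraph-and-marker logic above can be sketched as a toy reader. This is illustrative only; the real implementation handles far more cases:

```python
# Toy reader for the light format described above (illustrative only;
# Jupytext's real reader handles many more cases).
def split_light(script):
    """Split a light-format script into cells: paragraphs separated by blank
    lines become cells, and an explicit '# +' marker starts a cell that runs
    until the matching '# -' marker."""
    cells, current, explicit = [], [], False
    for line in script.splitlines():
        if line.strip() == "# +":      # explicit start-of-cell marker
            explicit = True
            continue
        if line.strip() == "# -":      # explicit end-of-cell marker
            cells.append("\n".join(current))
            current, explicit = [], False
            continue
        if line.strip() == "" and not explicit:
            if current:                # a blank line closes an implicit cell
                cells.append("\n".join(current))
                current = []
            continue
        current.append(line)
    if current:
        cells.append("\n".join(current))
    return cells

script = """a = 1

# +
b = 2

c = b + 1
# -"""
print(split_light(script))  # ['a = 1', 'b = 2\n\nc = b + 1']
```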

The light format is currently available for Python, Julia, R, Bash, Scheme and C++. Open our sample notebook in the light format here.

The percent format

The percent format is a representation of Jupyter notebooks as scripts, in which cells are delimited with a commented double percent sign # %%. The format was introduced by Spyder five years ago, and is now supported by many editors.

Our implementation of the percent format is compatible with the original specifications by Spyder. We extended the format to allow markdown cells and cell metadata. Cell headers have the following structure:

# %% Optional text [cell type] {optional JSON metadata}

where cell type is either omitted (code cells), or [markdown] or [raw]. The content of markdown and raw cells is commented out in the resulting script.
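A sketch of how such a header could be parsed, as an illustration of the structure above (this is not Jupytext's actual parser):

```python
import json
import re

# Sketch of parsing the cell header shown above (an illustration, not
# Jupytext's actual parser).
HEADER = re.compile(
    r"^# %%(?P<title>[^\[{]*)"          # optional title text
    r"(?:\[(?P<cell_type>\w+)\])?\s*"   # optional [markdown] or [raw]
    r"(?P<metadata>\{.*\})?\s*$"        # optional JSON metadata
)

def parse_header(line):
    match = HEADER.match(line)
    metadata = match.group("metadata")
    return {
        "title": match.group("title").strip(),
        "cell_type": match.group("cell_type") or "code",  # code when omitted
        "metadata": json.loads(metadata) if metadata else {},
    }

print(parse_header('# %% Load data [markdown] {"active": "ipynb"}'))
# {'title': 'Load data', 'cell_type': 'markdown', 'metadata': {'active': 'ipynb'}}
```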

Percent scripts created by Jupytext have a header with explicit format information. The format of scripts with no header is inferred automatically: scripts with at least one # %% cell are identified as percent scripts.

The percent format is currently available for Python, Julia, R, Bash, Scheme and C++. Open our sample notebook in the percent format here.

If the percent format is your favorite, add the following to your .jupyter/jupyter_notebook_config.py file:

c.ContentsManager.preferred_jupytext_formats_save = "py:percent" # or "auto:percent"

Then, Jupytext's content manager will understand "jupytext": {"formats": "ipynb,py"}, as an instruction to create the paired Python script in the percent format.

By default, Jupyter magics are commented in the percent representation. If you are using percent scripts in Hydrogen and you want to preserve Jupyter magics, then add a "jupytext": {"comment_magics": false} entry to your notebook metadata, or add

c.ContentsManager.comment_magics = False

to Jupyter's configuration file.

Sphinx-gallery scripts

Another popular notebook-like format for Python scripts is the Sphinx-gallery format. Scripts that contain at least two lines with more than twenty hash signs are classified as Sphinx-Gallery notebooks by Jupytext.

Comments in Sphinx-Gallery scripts are formatted using reStructuredText rather than markdown. They can be converted to markdown for a nicer display in Jupyter by adding a c.ContentsManager.sphinx_convert_rst2md = True line to your Jupyter configuration file. Please note that this is a non-reversible transformation—use this only with Binder. Revert to the default value sphinx_convert_rst2md = False when you edit Sphinx-Gallery files with Jupytext.

Turn a GitHub repository containing Sphinx-Gallery scripts into a live notebook repository with Binder and Jupytext by adding only two files to the repo:

  • binder/requirements.txt, a list of the required packages (including jupytext)
  • .jupyter/jupyter_notebook_config.py with the following contents:
c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"
c.ContentsManager.preferred_jupytext_formats_read = "py:sphinx"
c.ContentsManager.sphinx_convert_rst2md = True

Our sample notebook is also represented in sphinx format here.

R knitr::spin scripts

The spin format implements these specifications:

  • Jupyter metadata are in YAML format, in a #'-commented header.
  • Markdown cells are commented with #'.
  • Code cells are exported verbatim. Cell metadata are signalled with #+. Cells end with a blank line, an explicit start-of-cell marker, or a markdown cell.

Jupyter Notebook or Jupyter Lab?

Jupytext works very well with the Jupyter Notebook editor, and we recommend that you get used to Jupytext within jupyter notebook first.

That being said, Jupytext also works well from Jupyter Lab. Please note that:

  • Jupytext's installation is identical in both Jupyter Notebook and Jupyter Lab
  • Jupyter Lab can open any paired notebook with .ipynb extension. Paired notebooks work exactly as in Jupyter Notebook: input cells are taken from the text notebook, and outputs from the .ipynb file. Both files are updated when the notebook is saved.
  • Pairing notebooks is slightly less convenient in Jupyter Lab than in Jupyter Notebook as Jupyter Lab has no notebook metadata editor yet. You will have to open the JSON representation of the notebook in an editor, find the notebook metadata and add the "jupytext" : {"formats": "ipynb,py"}, entry manually.
  • In Jupyter Lab, scripts or Markdown documents open as text by default. Open them as notebooks with the Open With -> Notebook context menu (available in Jupyter Lab 0.35 and above) as shown below:

Will my notebook really run in an IDE?

Well, that's what we expect. There is, however, a big difference between the Python environments of IDEs and Jupyter: in most IDEs the code is executed with python directly, not in a Jupyter kernel. For this reason, Jupytext comments out the Jupyter magics found in your notebook when exporting to any format but plain Markdown. Change this by adding a #escape or #noescape flag on the same line as the magic, or a "comment_magics": true or false entry in the "jupytext" section of the notebook metadata. Or set your preference globally on the contents manager by adding this line to .jupyter/jupyter_notebook_config.py:

c.ContentsManager.comment_magics = True # or False

Also, you may want some cells to be active only in the Python, or R Markdown representation. For this, use the active cell metadata. Set "active": "ipynb" if you want that cell to be active only in the Jupyter notebook. And "active": "py" if you want it to be active only in the Python script. And "active": "ipynb,py" if you want it to be active in both, but not in the R Markdown representation...
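The active semantics can be sketched as follows (illustrative, not Jupytext's actual code):

```python
# Illustrative sketch of the "active" cell metadata described above
# (not Jupytext's actual code).
def is_active(cell_metadata, fmt):
    """A cell is active in format `fmt` (e.g. 'py', 'ipynb', 'Rmd') unless
    its 'active' entry exists and does not list that format."""
    active = cell_metadata.get("active")
    return active is None or fmt in active.split(",")

print(is_active({"active": "ipynb"}, "py"))     # False: active only in the notebook
print(is_active({"active": "ipynb,py"}, "py"))  # True: active in both
```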

Extending the light and percent formats to more languages

You want to extend the light and percent format to another language? Please let us know! In principle that is easy, and you will only have to:

  • document the language extension and comment by adding one line to _SCRIPT_EXTENSIONS in languages.py.
  • contribute a sample notebook in tests/notebooks/ipynb_[language].
  • add two tests in test_mirror.py: one for the light format, and another one for the percent format.
  • make sure that the tests pass, and that the text representations of your notebook, found in tests/notebooks/mirror/ipynb_to_script and tests/notebooks/mirror/ipynb_to_percent, are valid scripts.

Jupytext's releases and backward compatibility

Jupytext will continue to evolve as we collect more feedback, and discover more ways to represent notebooks as text files. When a new release of Jupytext comes out, we do our best to ensure that it will not break your notebooks. Format changes will not happen often, and we try hard not to introduce breaking changes.

Jupytext checks the format version for paired notebooks only. If the format version of the text representation is not the current one, Jupytext will refuse to open the paired notebook. If the file's format version is newer than the one supported by your installed Jupytext, update Jupytext. Otherwise, you will have to choose between deleting (or renaming) either the .ipynb file or its paired text representation. Keep the one that is up to date, re-open your notebook, and Jupytext will regenerate the other file.

We also recommend that people who use Jupytext to collaborate on notebooks use identical versions of Jupytext.

I like this, how can I contribute?

Your feedback is precious to us: please let us know how we can improve Jupytext. With enough feedback we will be able to transition from the current beta phase to a stable phase. Thanks for starring the project on GitHub. Sharing it is also very helpful! By the way: stay tuned for announcements and demos on Medium and Twitter!



Download files


Source Distribution

git-graph-0.0.1.dev1.tar.gz (26.6 kB)

Uploaded Source

Built Distribution

git_graph-0.0.1.dev1-py3-none-any.whl (15.6 kB)

Uploaded Python 3
