Descriptive deep learning
Introduction
Welcome to Kur, the future of deep learning! Kur is the latest and greatest deep learning system because:
You can design, train, and evaluate models without ever needing to code.
You describe your model with easily understandable concepts, rather than trudge through implementing the model in some lower-level language.
You can quickly iterate on newer and better versions of your model using easily defined hyperparameters and all the power of the Jinja2 templating engine (see the sketch just after this list).
COMING SOON: You can share your models (in whole or part) with the community, making it incredibly easy to collaborate on sophisticated models.
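To make the hyperparameter idea above concrete, here is a purely hypothetical sketch of how a specification might define a value once and reference it elsewhere through Jinja2 templating. The section and layer names below are illustrative assumptions, not necessarily the exact Kur schema; the real examples shipped in the repository are the authoritative reference.

# Hypothetical sketch: key names are illustrative, not the official Kur schema.
settings:
  cnn:
    kernels: 64                      # a hyperparameter defined in one place

model:
  - input: images
  - convolution:
      kernels: "{{ cnn.kernels }}"   # referenced via a Jinja2 expression
      size: [2, 2]
  - activation: relu

Changing kernels: 64 to another value would then propagate everywhere the template expression appears, which is what makes rapid iteration cheap.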
Check out our homepage for complete documentation, including more examples and a tutorial!
What is Kur?
Kur is a system for quickly building and applying state-of-the-art deep learning models to new and exciting problems. Kur was designed to appeal to the entire machine learning community, from novices to veterans. It uses specification files that are simple to read and author, meaning that you can get started building sophisticated models without ever needing to code. Even so, Kur exposes a friendly and extensible API to support advanced deep learning architectures or workflows. Excited? Keep reading!
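The sketch above showed how hyperparameters might be templated; the sketch below gives a rough idea of what "specification file" means overall: a plain YAML document with a few top-level sections describing the model, how to train it, and how to score it. The key names here are illustrative assumptions rather than the documented Kur format; the mnist.yml file used later on this page is the real thing.

# Hypothetical sketch of a specification file; key names are illustrative.
model:                               # the network, described layer by layer
  - input: images
  - flatten:
  - dense: 10
  - activation: softmax
    name: labels

train:                               # data source, training length, checkpoints
  epochs: 10

loss:
  - target: labels
    name: categorical_crossentropy

evaluate:                            # where to write predictions
  destination: results.pkl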
Get the Code
Kur is really easy to install! It runs on Python 3.4+ only, so if you are still running Python 2, you’ll need to install Python 3 first.
Once you have Python 3, you can pick one of these two options for installing Kur.
From PyPI
$ pip install kur
From GitHub
Just clone the repository and install it with pip:
$ git clone https://github.com/deepgram/kur
$ cd kur
$ pip install .
Troubleshooting
If you run into any problems installing or using Kur, please check out our troubleshooting page for lots of useful help. And if you want more detailed installation instructions, with help on setting up your environment, be sure to see our installation page.
Try It Out!
Remember, there are more examples on the homepage!
MNIST: Handwriting recognition
Let’s jump right in and see how awesome Kur is! The first example we’ll look at is Yann LeCun’s MNIST dataset. This is a dataset of 28x28 pixel images of individual handwritten digits between 0 and 9. The goal of our model will be to perform image recognition, tagging the image with the most likely digit it represents.
First, you need to Get the Code! If you installed via pip, you’ll need to check out the examples directory from the repository; if you installed via git, then you already have the examples directory locally. So let’s move into the examples directory:
$ cd examples
Now let’s train the MNIST model. This will download the data directly from the web, and then start training for 10 epochs.
$ kur train mnist.yml
Downloading: 100%|█████████████████████████████████| 9.91M/9.91M [03:44<00:00, 44.2Kbytes/s]
Downloading: 100%|█████████████████████████████████| 28.9K/28.9K [00:00<00:00, 66.1Kbytes/s]
Downloading: 100%|█████████████████████████████████| 1.65M/1.65M [00:31<00:00, 52.6Kbytes/s]
Downloading: 100%|█████████████████████████████████| 4.54K/4.54K [00:00<00:00, 19.8Kbytes/s]
Epoch 1/10, loss=1.750: 100%|███████████████████████| 320/320 [00:02<00:00, 154.81samples/s]
Validating, loss=1.102: 100%|██████████████████| 10000/10000 [00:05<00:00, 1737.00samples/s]
Epoch 2/10, loss=0.888: 100%|███████████████████████| 320/320 [00:01<00:00, 283.95samples/s]
Validating, loss=0.666: 100%|██████████████████| 10000/10000 [00:08<00:00, 1209.40samples/s]
Epoch 3/10, loss=0.551: 100%|███████████████████████| 320/320 [00:01<00:00, 269.09samples/s]
Validating, loss=0.504: 100%|██████████████████| 10000/10000 [00:08<00:00, 1221.64samples/s]
Epoch 4/10, loss=0.446: 100%|███████████████████████| 320/320 [00:01<00:00, 233.96samples/s]
Validating, loss=0.438: 100%|██████████████████| 10000/10000 [00:08<00:00, 1174.40samples/s]
Epoch 5/10, loss=0.544: 100%|███████████████████████| 320/320 [00:01<00:00, 269.47samples/s]
Validating, loss=0.398: 100%|██████████████████| 10000/10000 [00:08<00:00, 1235.31samples/s]
Epoch 6/10, loss=0.508: 100%|███████████████████████| 320/320 [00:01<00:00, 253.47samples/s]
Validating, loss=0.409: 100%|██████████████████| 10000/10000 [00:08<00:00, 1243.92samples/s]
Epoch 7/10, loss=0.464: 100%|███████████████████████| 320/320 [00:01<00:00, 263.46samples/s]
Validating, loss=0.384: 100%|██████████████████| 10000/10000 [00:08<00:00, 1209.80samples/s]
Epoch 8/10, loss=0.388: 100%|███████████████████████| 320/320 [00:01<00:00, 260.60samples/s]
Validating, loss=0.375: 100%|██████████████████| 10000/10000 [00:08<00:00, 1230.72samples/s]
Epoch 9/10, loss=0.485: 100%|███████████████████████| 320/320 [00:01<00:00, 278.96samples/s]
Validating, loss=0.428: 100%|██████████████████| 10000/10000 [00:08<00:00, 1228.11samples/s]
Epoch 10/10, loss=0.428: 100%|██████████████████████| 320/320 [00:01<00:00, 280.16samples/s]
Validating, loss=0.360: 100%|██████████████████| 10000/10000 [00:08<00:00, 1225.70samples/s]
What just happened? Kur downloaded the MNIST dataset from LeCun’s website, and then trained a model for ten epochs. Awesome!
Now let’s see how well our model actually performs:
$ kur evaluate mnist.yml
Evaluating: 100%|██████████████████████████████| 10000/10000 [00:05<00:00, 1767.62samples/s]
LABEL CORRECT TOTAL ACCURACY
0 968 980 98.8%
1 1097 1135 96.7%
2 867 1032 84.0%
3 931 1010 92.2%
4 903 982 92.0%
5 744 892 83.4%
6 838 958 87.5%
7 927 1028 90.2%
8 860 974 88.3%
9 825 1009 81.8%
ALL 8960 10000 89.6%
Wow! After only ten epochs, we already have about 90% overall accuracy for recognizing handwritten digits. That’s how awesome Kur is. Excited yet? Try tweaking the mnist.yml file, and then continue the tutorial over on our homepage to see more awesome stuff!
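If you want a place to start tweaking, one low-risk experiment is simply training longer. Assuming the training length is exposed through a key like the one in the earlier sketches (the name epochs here is an assumption, so check the actual mnist.yml before editing), the change is a one-liner:

train:
  epochs: 20    # hypothetical key: double the training time

Then re-run kur train mnist.yml and kur evaluate mnist.yml to see whether the extra epochs improve the per-digit accuracy.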