
# proactive-jupyter-kernel
ProActiveKernel for Jupyter

## 1. Requirements:

* Python 2 or 3

## 2. Installation:

### 2.1 Using PyPI

1) open a terminal

2) install the proactive jupyter kernel:

```bash
$ pip install proactive-jupyter-kernel --upgrade
```

### 2.2 Using source code

1) open a terminal

2) clone the repository on your local machine:

```bash
$ git clone
```

3) install the proactive jupyter kernel:

```bash
$ pip install proactive-jupyter-kernel/
$ python -m proactive-jupyter-kernel.install
```

## 3. Platform

You can use any Jupyter platform. We recommend JupyterLab. To launch it from your terminal after having installed it:

```bash
$ nohup jupyter lab &>/dev/null &
```

Once it opens, click on the ProActive icon to open a notebook based on the ProActive kernel.

## 4. Connect:

### 4.1 Using connect()

If you are trying ProActive for the first time, please sign up on the try platform.
Once you receive your login and password, connect using the `#%connect()` pragma:

```python
#%connect(login=YOUR_LOGIN, password=YOUR_PASSWORD)
```

To connect to another host, use the same pragma this way:

```python
#%connect(host=YOUR_HOST, port=YOUR_PORT, login=YOUR_LOGIN, password=YOUR_PASSWORD)
```

### 4.2 Using config file:

For automatic sign-in, create a file named `proactive_config.ini` in your notebook's location.

Fill your configuration file according to the following format:

```ini
host = YOUR_HOST
port = YOUR_PORT

login = YOUR_LOGIN
password = YOUR_PASSWORD
```

Save your changes and restart the ProActive kernel.
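As an illustration of this layout (not the kernel's actual loading code), the sectionless `.ini` format above can be parsed with Python's standard `configparser` by prepending a default section header; the values below are placeholders:

```python
import configparser

# Placeholder config content in the sectionless layout shown above.
raw = """
host = example.com
port = 8080

login = alice
password = secret
"""

# configparser requires at least one section, so prepend [DEFAULT].
config = configparser.ConfigParser()
config.read_string("[DEFAULT]\n" + raw)

host = config["DEFAULT"]["host"]
port = int(config["DEFAULT"]["port"])
print(host, port)  # example.com 8080
```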

You can also force the current kernel to connect using any `.ini` config file through the `#%connect()` pragma:

```python
#%connect(path=PATH_TO_CONFIG_FILE)
```
(for more information about this format please check

## 5. Usage

### 5.1 Creating a Python task

To create a task, write your Python implementation in a notebook code cell (a default name
will be given to the created task):

```python
print('Hello world')
```

Or you can provide more information about the task by using the `#%task()` pragma:

```python
#%task(name=TASK_NAME)
print('Hello world')
```

### 5.2 Adding a fork environment

To configure a fork environment for a task, use the `#%fork_env()` pragma. A first way to do this
is by providing the name of the corresponding task, followed by the fork environment implementation:

```python
#%fork_env(name=TASK_NAME)
containerName = 'activeeon/dlm3'
dockerRunCommand = 'docker run '
dockerParameters = '--rm '
paHomeHost = variables.get("PA_SCHEDULER_HOME")
paHomeContainer = variables.get("PA_SCHEDULER_HOME")
proActiveHomeVolume = '-v '+paHomeHost +':'+paHomeContainer+' '
workspaceHost = localspace
workspaceContainer = localspace
workspaceVolume = '-v '+localspace +':'+localspace+' '
containerWorkingDirectory = '-w '+workspaceContainer+' '
preJavaHomeCmd = dockerRunCommand + dockerParameters + proActiveHomeVolume + workspaceVolume + containerWorkingDirectory + containerName
```
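As a stand-alone sketch of what this fork-environment code assembles, the same command construction can be run outside the kernel with stubbed runtime values; `variables` and `localspace` are normally injected by the ProActive runtime, and the paths below are placeholders:

```python
# Stubbed stand-ins for values the ProActive runtime normally injects.
variables = {"PA_SCHEDULER_HOME": "/opt/proactive"}
localspace = "/tmp/workspace"

containerName = 'activeeon/dlm3'
dockerRunCommand = 'docker run '
dockerParameters = '--rm '
paHomeHost = variables.get("PA_SCHEDULER_HOME")
paHomeContainer = variables.get("PA_SCHEDULER_HOME")
proActiveHomeVolume = '-v ' + paHomeHost + ':' + paHomeContainer + ' '
workspaceVolume = '-v ' + localspace + ':' + localspace + ' '
containerWorkingDirectory = '-w ' + localspace + ' '
preJavaHomeCmd = (dockerRunCommand + dockerParameters + proActiveHomeVolume
                  + workspaceVolume + containerWorkingDirectory + containerName)

print(preJavaHomeCmd)
# docker run --rm -v /opt/proactive:/opt/proactive -v /tmp/workspace:/tmp/workspace -w /tmp/workspace activeeon/dlm3
```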

A second way is by providing the name of the task and the path of a .py file containing the fork environment code:

```python
#%fork_env(name=TASK_NAME, path=./
```

### 5.3 Adding a selection script

To add a selection script to a task, use the `#%selection_script()` pragma. A first way to do it
is by providing the name of the corresponding task, followed by the selection code implementation:

```python
#%selection_script(name=TASK_NAME)
selected = True
```
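For a less trivial sketch, the selection code only has to set the boolean `selected`; the predicate below (selecting nodes with at least two CPU cores) is a hypothetical example, not a pragma the kernel requires:

```python
import multiprocessing

# Hypothetical selection predicate: only accept nodes with >= 2 cores.
selected = multiprocessing.cpu_count() >= 2
```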

A second way is by providing the name of the task and the path of a .py file containing the selection code:

```python
#%selection_script(name=TASK_NAME, path=./
```

### 5.4 Create a job

To create a job, use the `#%job()` pragma:

```python
#%job(name=JOB_NAME)
```

If the job has already been created, calling this pragma simply renames the existing job with the newly provided name.

Note that it is not necessary to create and name the job explicitly. If the user does not do so, this step is implicitly
performed when the job is submitted (see the next section). In the latter case, the job is named after your notebook.

### 5.5 Submit your job to the scheduler

To finally submit your job to the ProActive scheduler, use the `#%submit_job()` pragma:

```python
#%submit_job()
```

The returned values of your final tasks are automatically printed in the notebook results.
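Putting the pragmas above together, a minimal session might look like the following; each snippet is a separate notebook cell, and the names are illustrative:

```
#%connect(login=YOUR_LOGIN, password=YOUR_PASSWORD)

#%task(name=hello)
print('Hello world')

#%job(name=hello_job)

#%submit_job()
```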

## 6. Current status

Supported features:

* the connect, task, selection_script, fork_env, job, and submit_job pragmas
* connection using a configuration file
* implicit retrieval and printing of results in submit_job

Planned improvements:

1. add task dependencies
2. make pragma parsing less sensitive to spaces
3. add a get_results pragma
4. check how to use NetworkX for plotting job graphs
5. check how to highlight Python syntax
