User code executors for Backend.AI kernels

Project description

A common base runner for various programming languages.

It manages an internal task queue so that multiple command/code execution requests are processed in FIFO order, without garbling the console output.
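The FIFO pattern above can be sketched with a single worker task draining an asyncio.Queue. This is a minimal illustration, not BaseRunner's actual implementation; the FifoExecutor class and its method names are hypothetical:

```python
import asyncio

class FifoExecutor:
    """Minimal sketch of the FIFO execution pattern: one worker task
    drains a queue, so at most one request runs at a time and console
    output cannot interleave. (Hypothetical class, not BaseRunner.)"""

    def __init__(self):
        self._queue = asyncio.Queue()
        self._worker = None

    def start(self):
        # Spawn the single worker that serializes all requests.
        self._worker = asyncio.get_running_loop().create_task(self._drain())

    async def _drain(self):
        while True:
            coro_factory, fut = await self._queue.get()
            try:
                fut.set_result(await coro_factory())
            except Exception as exc:
                fut.set_exception(exc)
            finally:
                self._queue.task_done()

    async def submit(self, coro_factory):
        # Enqueue a request and wait for its result; requests complete
        # strictly in submission order.
        fut = asyncio.get_running_loop().create_future()
        await self._queue.put((coro_factory, fut))
        return await fut

async def demo():
    ex = FifoExecutor()
    ex.start()
    order = []

    async def job(i):
        order.append(i)
        return i

    # Submit three requests concurrently; they still run one by one.
    await asyncio.gather(*(ex.submit(lambda i=i: job(i)) for i in range(3)))
    ex._worker.cancel()
    return order

order = asyncio.run(demo())
print(order)  # → [0, 1, 2]
```

Even though the three submissions are issued concurrently via asyncio.gather, the single worker guarantees they execute in submission order.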

How to write a new computation kernel

Subclass ai.backend.kernel.BaseRunner and implement the following methods:

  • async def init_with_loop(self)

    • Called after the asyncio event loop becomes available.

    • Mostly just pass.

    • If your kernel supports interactive user input, set self.user_input_queue to an asyncio.Queue object. It is your job to use this queue to wait for user input. (See the handle_input() method in ai/backend/kernel/python/inproc.py for reference.) If it is not set, any attempt to get interactive user input simply returns "<user-input is unsupported>".

  • async def build_heuristic(self)

    • (Batch mode) Write heuristic code that finds a build script or runs a good-enough build command for your language/runtime.

    • (Blocking) You don’t have to worry about overlapped execution since the base runner will take care of it.

  • async def execute_heuristic(self)

    • (Batch mode) Write heuristic code that finds and runs the main program.

    • (Blocking) You don’t have to worry about overlapped execution since the base runner will take care of it.

  • async def query(self, code_text)

    • (Query mode) Directly run the given code snippet. Depending on the language/runtime, you may need to create a temporary file and execute an external program.

    • (Blocking) You don’t have to worry about overlapped execution since the base runner will take care of it.

  • async def complete(self, data)

    • (Query mode) Take a dict data that includes the current line of code the user is typing, and return a list of strings that can auto-complete it.

    • (Non-blocking) You should implement this method to run asynchronously with ongoing code execution.

  • async def interrupt(self)

    • (Query mode) Send an interruption signal to the running program. The implementation is up to you. The Python runner currently spawns a thread for in-process query-mode execution and uses a ctypes hack to throw a KeyboardInterrupt exception into it.

    • (Non-blocking) You should implement this method to run asynchronously with ongoing code execution.

NOTE: Existing kernel implementations are good references!
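The methods above can be sketched as a skeletal runner. Note the BaseRunner below is a stand-in stub so the sketch runs standalone; the real base class is ai.backend.kernel.BaseRunner, and MyLangRunner with its subprocess-based query() is purely illustrative:

```python
import asyncio
import sys
import tempfile

class BaseRunner:
    """Stand-in stub so this sketch is self-contained; the real base
    class is ai.backend.kernel.BaseRunner, whose interface may differ."""

class MyLangRunner(BaseRunner):
    # Hypothetical runner that executes snippets via an external
    # interpreter (here: this Python itself, for demonstration).

    async def init_with_loop(self):
        pass  # no interactive user input support in this sketch

    async def query(self, code_text):
        # (Query mode) Dump the snippet to a temporary file and run it
        # with an external program, as suggested for such runtimes.
        with tempfile.NamedTemporaryFile('w', suffix='.py',
                                         delete=False) as f:
            f.write(code_text)
            path = f.name
        proc = await asyncio.create_subprocess_exec(
            sys.executable, path,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        out, err = await proc.communicate()
        return out.decode(), err.decode()

    async def complete(self, data):
        # (Query mode, non-blocking) No completion support in this sketch.
        return []

    async def interrupt(self):
        # (Query mode, non-blocking) A real runner would signal or
        # terminate the external process here.
        pass

out, err = asyncio.run(MyLangRunner().query('print(6 * 7)'))
print(out.strip())  # → 42
```

Because the base runner serializes query() calls through its task queue, this implementation can stay blocking; only complete() and interrupt() must cooperate with an ongoing execution.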

How to use in your Backend.AI computation kernels

Install this package using pip via a RUN instruction in your Dockerfile. Then set the CMD instruction like below:

CMD ["/home/backend.ai/jail", "-policy", "/home/backend.ai/policy.yml", \
     "/usr/local/bin/python", "-m", "ai.backend.kernel", "<language>"]

where <language> should be one of the supported language names defined in the lang_map variable in the ai/backend/kernel/__main__.py file.
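Putting the two instructions together, a Dockerfile for a Python kernel image might look like the sketch below. The base image and the "python" language key are assumptions; the jail and policy paths mirror the CMD example above:

```dockerfile
FROM python:3.6
# Install the runner package via pip (the RUN instruction mentioned above).
RUN pip install backend.ai-kernel-runner
# Launch the kernel through the sandbox, selecting the "python" runner;
# the last argument must match a key in lang_map in
# ai/backend/kernel/__main__.py.
CMD ["/home/backend.ai/jail", "-policy", "/home/backend.ai/policy.yml", \
     "/usr/local/bin/python", "-m", "ai.backend.kernel", "python"]
```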


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

backend.ai-kernel-runner-1.4.2.tar.gz (22.8 kB)

Uploaded Source

Built Distribution

backend.ai_kernel_runner-1.4.2-py3-none-any.whl (40.3 kB)

Uploaded Python 3

File details

Details for the file backend.ai-kernel-runner-1.4.2.tar.gz.

File metadata

  • Download URL: backend.ai-kernel-runner-1.4.2.tar.gz
  • Upload date:
  • Size: 22.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/40.2.0 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.3

File hashes

Hashes for backend.ai-kernel-runner-1.4.2.tar.gz
Algorithm Hash digest
SHA256 84c0928fe2b61e0b182e6491f66bcfc0565f2eb90d17459cc079c29fd7f38afe
MD5 b034727a49ea531b2a483b3b28ca1118
BLAKE2b-256 ca8291ab323be2521455f0de0d7ea6dd2401bd8d7197b3c1627b0d5eb061b8aa

File details

Details for the file backend.ai_kernel_runner-1.4.2-py3-none-any.whl.

File metadata

  • Download URL: backend.ai_kernel_runner-1.4.2-py3-none-any.whl
  • Upload date:
  • Size: 40.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.19.1 setuptools/40.2.0 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.3

File hashes

Hashes for backend.ai_kernel_runner-1.4.2-py3-none-any.whl
Algorithm Hash digest
SHA256 8bf0b2719d7b0ff5c693da177dd99e248cb0958f8753d0fd7fe63320b0da9284
MD5 c8205428785a3699be168b7adee73c8c
BLAKE2b-256 afe3f56813a4564f53bd9ef38307e152783038130bd5c98f79ce0923e8d67d0f
