Backend.AI Storage Proxy

Backend.AI Storage Proxy is an RPC daemon that manages vfolders used by Backend.AI agents, with quota and storage-specific optimization support.

Package Structure

  • ai.backend.storage
    • server: The agent daemon which communicates with the Backend.AI Manager
    • api.client: The client-facing API to handle the tus.io server-side protocol for uploads and ranged HTTP queries for downloads (see the download sketch after this list).
    • api.manager: The manager-facing (internal) API to provide abstraction of volumes and separation of the hardware resources for volume and file operations.
    • vfs
      • The minimal fallback backend which only uses the standard Linux filesystem interfaces
    • xfs
      • XFS-optimized backend with a small daemon to manage XFS project IDs for quota limits
      • agent: Implementation of AbstractVolumeAgent with XFS support
    • purestorage
      • PureStorage's FlashBlade-optimized backend with RapidFile Toolkit (formerly PureTools)
    • netapp
      • NetApp QTree integration backend based on the NetApp ONTAP REST API
    • weka
      • Weka.IO integration backend based on the Weka.IO V2 REST API
    • cephfs (TODO)
      • CephFS-optimized backend with quota limit support
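
As an illustration of the client-facing download path mentioned above, here is a minimal sketch of a ranged HTTP query. The endpoint path and token parameter are illustrative placeholders, not the actual API names; real downloads use a JWT issued through the manager API.

import requests  # third-party: pip install requests

BASE = "http://127.0.0.1:6021"        # default client API address (see below)
url = f"{BASE}/download"              # hypothetical endpoint path
headers = {"Range": "bytes=0-1023"}   # ranged HTTP query: ask for the first 1 KiB
resp = requests.get(url, params={"token": "<jwt-from-manager>"}, headers=headers)
assert resp.status_code in (200, 206)  # 206 Partial Content when the range is honored
print(len(resp.content), "bytes received")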

Installation

Prerequisites

Installation Process

First, prepare a source clone of this repository:

# git clone https://github.com/lablup/backend.ai-storage-agent

Create and activate a virtualenv; from now on, let's assume all shell commands are executed inside it.

Now install dependencies:

# pip install -U -r requirements/dist.txt  # for deployment
# pip install -U -r requirements/dev.txt   # for development

Then, copy the sample configuration to the root of the project folder as storage-proxy.toml and edit it to match your machine:

# cp config/sample.toml storage-proxy.toml
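
Before starting the daemon, you can sanity-check that the copied file parses as valid TOML. A minimal sketch using the stdlib parser (Python 3.11+); it lists the top-level sections instead of assuming their names, since the exact schema is defined by config/sample.toml:

import tomllib  # stdlib TOML parser (Python 3.11+)

with open("storage-proxy.toml", "rb") as f:
    cfg = tomllib.load(f)

for section in cfg:  # print whatever top-level tables the sample defines
    print(f"[{section}]")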

When done, start the storage server:

# python -m ai.backend.storage.server

It will start the Storage Proxy daemon bound to 127.0.0.1:6021 (client API) and 127.0.0.1:6022 (manager API).
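
A quick way to confirm that both endpoints are listening (a sketch; the ports follow the defaults quoted above and may differ in your storage-proxy.toml):

import socket

for name, port in (("client API", 6021), ("manager API", 6022)):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        ok = s.connect_ex(("127.0.0.1", port)) == 0  # 0 means the connect succeeded
    print(f"{name} on 127.0.0.1:{port}: {'up' if ok else 'down'}")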

NOTE: Depending on the backend, the server may need to run as root.

Production Deployment

To get a performance boost from the OS-provided sendfile() syscall for file transfers, SSL termination should be handled by a reverse proxy such as nginx, and the storage proxy daemon itself should run without SSL.
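
The reasoning: sendfile() lets the kernel copy file bytes straight from the page cache into a plain socket, while a TLS socket forces the payload through userspace for encryption, losing the zero-copy path. A minimal illustration with the standard library (Linux only; the port is arbitrary):

import os
import socket

def serve_file_once(path: str, port: int = 8099) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            # Zero-copy: the kernel moves bytes from the page cache
            # directly into the socket buffer, never touching userspace.
            sent = os.sendfile(conn.fileno(), f.fileno(), offset, size - offset)
            if sent == 0:
                break
            offset += sent
    conn.close()
    srv.close()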

Filesystem Backends

VFS

Prerequisites

  • User account permission to access the given directory
    • Make sure the directory you want to serve (e.g., /vfroot/vfs) exists

XFS

Prerequisites

  • Local device mounted under /vfroot
  • Native support for the XFS filesystem
    • Mount the XFS volume with the -o pquota option to enable project quotas
    • To turn on quotas on the root filesystem, the quota mount flags must be set via the rootflags= boot parameter. Usually, this is not recommended.
  • Root privileges
    • xfs_quota, which performs the quota-related commands, requires root privileges (see the sketch after this list).
    • Thus, you need to start the Storage Proxy service as the root user or as a user with passwordless sudo access.
    • If the root user starts the Storage Proxy, the owner of every file created is also root. In some situations this is not desirable; in that case, it may be better to start the service as a regular user with passwordless sudo privileges.
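
To make the root-privilege requirement concrete, here is a hedged sketch of the kind of xfs_quota calls involved: attaching a directory tree to an XFS project and capping its size. The project ID and paths are illustrative; the actual backend manages these through its own daemon.

import subprocess

MOUNTPOINT = "/vfroot/xfs"             # assumes XFS mounted with -o pquota
PROJ_ID = "1001"                       # illustrative project ID
TARGET = f"{MOUNTPOINT}/some-vfolder"  # illustrative directory

def xfs_quota(cmd: str) -> None:
    # xfs_quota needs root, hence sudo (see the note above).
    subprocess.run(["sudo", "xfs_quota", "-x", "-c", cmd, MOUNTPOINT], check=True)

xfs_quota(f"project -s -p {TARGET} {PROJ_ID}")  # register the directory tree as a project
xfs_quota(f"limit -p bhard=10g {PROJ_ID}")      # hard block limit of 10 GiB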

Creating virtual XFS device for testing

If you are the only user of the storage and just want to test, create a file-backed virtual block device and attach it to a loopback device:

  1. Create a file of the desired size:
# dd if=/dev/zero of=xfs_test.img bs=1G count=100
  2. Format the file as an XFS partition:
# mkfs.xfs xfs_test.img
  3. Attach it to a loopback device:
# export LODEVICE=$(losetup -f)
# losetup $LODEVICE xfs_test.img
  4. Create the mount point and mount the loopback device with the pquota option:
# mkdir -p /vfroot/xfs
# mount -o loop -o pquota $LODEVICE /vfroot/xfs
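
To verify that project quota accounting is actually active on the new mount, you can query xfs_quota's state report (a sketch; run as root):

import subprocess

out = subprocess.run(
    ["xfs_quota", "-x", "-c", "state -p", "/vfroot/xfs"],
    check=True, capture_output=True, text=True,
).stdout
print(out)  # look for project quota accounting/enforcement reported as ON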

Note on operation

XFS keeps quota mapping information in two files: /etc/projects and /etc/projid. If they are deleted or damaged in any way, the per-directory quota information is also lost, so it is crucial not to delete them accidentally. If possible, it is a good idea to back them up to a different disk or an NFS share.
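
A minimal backup sketch along those lines; the destination path is an assumption, so pick any location on a different disk or NFS:

import shutil
from pathlib import Path

BACKUP_DIR = Path("/mnt/backup/xfs-quota")  # assumption: any off-disk location

def backup_quota_maps() -> None:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    for src in (Path("/etc/projects"), Path("/etc/projid")):
        if src.exists():
            shutil.copy2(src, BACKUP_DIR / src.name)  # copy2 preserves metadata

backup_quota_maps()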

PureStorage FlashBlade

Prerequisites

  • NFSv3 export mounted under /vfroot
  • Purity API access

CephFS

Prerequisites

  • FUSE export mounted under /vfroot

NetApp ONTAP

Prerequisites

  • NFSv3 export mounted under /vfroot
  • NetApp ONTAP API access
  • Native NetApp XCP or a Dockerized NetApp XCP container
  • A qtree created explicitly in the volume via the NetApp ONTAP Sysmgr GUI

Note on operation

The volume host of the Backend.AI Storage Proxy corresponds to a qtree of NetApp ONTAP, not to a NetApp ONTAP volume.
Please DO NOT remove the Backend.AI-mapped qtree in the NetApp ONTAP Sysmgr GUI; otherwise, you cannot access the NetApp ONTAP volume through Backend.AI.

NOTE:
The qtree name in the configuration file (storage-proxy.toml) must match the name created in NetApp ONTAP Sysmgr.

Weka.IO

Prerequisites

  • Weka.IO agent installed and running
  • Weka.IO filesystem mounted on the local machine, with permissions set so that the storage-proxy process can read and write
  • Weka.IO REST API access (username/password/organization; see the login sketch after this list)
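
To check the REST API access listed above, a login sketch follows. The /api/v2/login path, payload keys, and response shape reflect the author's reading of the Weka.IO V2 REST docs and should be treated as assumptions; verify them against your cluster's API reference.

import requests  # third-party: pip install requests

WEKA_API = "https://weka.example.com:14000"  # illustrative cluster address

resp = requests.post(
    f"{WEKA_API}/api/v2/login",  # assumed V2 login endpoint
    json={"username": "backend-ai", "password": "***", "org": "Root"},
)
resp.raise_for_status()
token = resp.json()["data"]["access_token"]  # assumed response shape
print("token acquired:", bool(token))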
