Introduction

Kubernetes Container Storage Interface (CSI) Documentation

This site documents how to develop, deploy, and test a Container Storage Interface (CSI) driver on Kubernetes.

The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI, third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code.

The target audience for this site is third-party developers interested in developing CSI drivers for Kubernetes.

Kubernetes users interested in how to deploy or manage an existing CSI driver on Kubernetes should look at the documentation provided by the author of the CSI driver.

Kubernetes users interested in how to use a CSI driver should look at kubernetes.io documentation.

Kubernetes Releases

Kubernetes    CSI Spec Compatibility    Status
v1.9          v0.1.0                    Alpha
v1.10         v0.2.0                    Beta
v1.11         v0.3.0                    Beta
v1.13         v0.3.0, v1.0.0            GA

Development and Deployment

Minimum Requirements (for Developing and Deploying a CSI driver for Kubernetes)

Kubernetes is as minimally prescriptive about packaging and deployment of a CSI Volume Driver as possible.

The only requirements are around how Kubernetes (master and node) components find and communicate with a CSI driver.

Specifically, the following is dictated by Kubernetes regarding CSI:

  • Kubelet to CSI Driver Communication
    • Kubelet directly issues CSI calls (like NodeStageVolume, NodePublishVolume, etc.) to CSI drivers via a Unix Domain Socket to mount and unmount volumes.
    • Kubelet discovers CSI drivers (and the Unix Domain Socket to use to interact with a CSI driver) via the kubelet plugin registration mechanism.
    • Therefore, all CSI drivers deployed on Kubernetes MUST register themselves using the kubelet plugin registration mechanism on each supported node.
  • Master to CSI Driver Communication
    • Kubernetes master components do not communicate directly (via a Unix Domain Socket or otherwise) with CSI drivers.
    • Kubernetes master components interact only with the Kubernetes API.
    • Therefore, CSI drivers that require operations that depend on the Kubernetes API (like volume create, volume attach, volume snapshot, etc.) MUST watch the Kubernetes API and trigger the appropriate CSI operations against the CSI driver.

Because these requirements are minimally prescriptive, CSI driver developers are free to implement and deploy their drivers as they see fit.

That said, to ease development and deployment, the mechanism described below is recommended.

Recommended Mechanism (for Developing and Deploying a CSI driver for Kubernetes)

The Kubernetes development team has established a "Recommended Mechanism" for developing, deploying, and testing CSI Drivers on Kubernetes. It aims to reduce boilerplate code and simplify the overall process for CSI Driver developers.

This "Recommended Mechanism" makes use of the following components:

To implement a CSI driver using this mechanism, a CSI driver developer should:

  1. Create a containerized application implementing the Identity, Node, and optionally the Controller services described in the CSI specification (the CSI driver container).
  2. Unit test it using csi-sanity.
  3. Define Kubernetes API YAML files that deploy the CSI driver container along with appropriate sidecar containers.
  4. Deploy the driver on a Kubernetes cluster and run end-to-end functional tests on it.

Reference Links

Developing CSI Driver for Kubernetes

The first step to creating a CSI driver is writing an application implementing the gRPC services described in the CSI specification.

At a minimum, CSI drivers must implement the following CSI services:

  • CSI Identity service
    • Enables callers (Kubernetes components and CSI sidecar containers) to identify the driver and what optional functionality it supports.
  • CSI Node service
    • Only NodePublishVolume, NodeUnpublishVolume, and NodeGetCapabilities are required.
    • Required methods enable callers to make a volume available at a specified path and discover what optional functionality the driver supports.

All CSI services may be implemented in the same CSI driver application. The CSI driver application should be containerized to make it easy to deploy on Kubernetes. Once containerized, the CSI driver can be paired with CSI Sidecar Containers and deployed in node and/or controller mode as appropriate.
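For illustration, the following is a minimal sketch, in Go, of an Identity service served over a Unix Domain Socket, assuming the CSI v1.0 Go bindings (github.com/container-storage-interface/spec/lib/go/csi). The driver name, version, and socket path are placeholders; a real driver would also register a complete Node service (csi.RegisterNodeServer) and, optionally, a Controller service.

package main

import (
    "context"
    "net"
    "os"

    "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc"
)

// driver is a skeleton; the name and version below are illustrative placeholders.
type driver struct{}

// GetPluginInfo identifies the driver to callers (the name must match the StorageClass provisioner name).
func (d *driver) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
    return &csi.GetPluginInfoResponse{Name: "exampledriver.example.com", VendorVersion: "0.1.0"}, nil
}

// GetPluginCapabilities advertises optional plugin-level functionality (empty here; see the Capabilities section).
func (d *driver) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
    return &csi.GetPluginCapabilitiesResponse{}, nil
}

// Probe reports whether the driver is healthy and ready to serve calls.
func (d *driver) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
    return &csi.ProbeResponse{}, nil
}

func main() {
    endpoint := "/csi/csi.sock" // placeholder Unix Domain Socket path
    os.Remove(endpoint)
    listener, err := net.Listen("unix", endpoint)
    if err != nil {
        panic(err)
    }
    server := grpc.NewServer()
    csi.RegisterIdentityServer(server, &driver{})
    // csi.RegisterNodeServer(server, ...) would be added once all Node RPCs are implemented.
    server.Serve(listener)
}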

Capabilities

If your driver supports additional features, CSI "capabilities" can be used to advertise the optional methods/services it supports, for example:

  • CONTROLLER_SERVICE (PluginCapability)
    • The entire CSI Controller service is optional. This capability indicates the driver implements one or more of the methods in the CSI Controller service.
  • VOLUME_ACCESSIBILITY_CONSTRAINTS (PluginCapability)
    • This capability indicates the volumes for this driver may not be equally accessible from all nodes in the cluster, and that the driver will return additional topology related information that Kubernetes can use to schedule workloads more intelligently or influence where a volume will be provisioned.
  • VolumeExpansion (PluginCapability)
    • This capability indicates the driver supports resizing (expanding) volumes after creation.
  • CREATE_DELETE_VOLUME (ControllerServiceCapability)
    • This capability indicates the driver supports dynamic volume provisioning and deleting.
  • PUBLISH_UNPUBLISH_VOLUME (ControllerServiceCapability)
    • This capability indicates the driver implements ControllerPublishVolume and ControllerUnpublishVolume -- operations that correspond to the Kubernetes volume attach/detach operations. This may, for example, result in a "volume attach" operation against the Google Cloud control plane to attach the specified volume to the specified node for the Google Cloud PD CSI Driver.
  • CREATE_DELETE_SNAPSHOT (ControllerServiceCapability)
    • This capability indicates the driver supports provisioning volume snapshots and the ability to provision new volumes using those snapshots.
  • CLONE_VOLUME (ControllerServiceCapability)
    • This capability indicates the driver supports cloning of volumes.
  • STAGE_UNSTAGE_VOLUME (NodeServiceCapability)
    • This capability indicates the driver implements NodeStageVolume and NodeUnstageVolume -- operations that correspond to the Kubernetes volume device mount/unmount operations. This may, for example, be used to create a global (per node) volume mount of a block storage device.

This is a partial list; please see the CSI spec for a complete list of capabilities. Also see the Features section to understand how a feature integrates with Kubernetes.
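As a sketch only (Go, CSI v1.0 Go bindings, reusing the same skeleton driver type as in the previous section), a driver might advertise a subset of these capabilities as follows; the set returned must match what the driver actually implements:

package exampledriver

import (
    "context"

    "github.com/container-storage-interface/spec/lib/go/csi"
)

// driver is the same skeleton type as in the Identity sketch above.
type driver struct{}

// GetPluginCapabilities advertises plugin-level capabilities such as CONTROLLER_SERVICE.
func (d *driver) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
    return &csi.GetPluginCapabilitiesResponse{
        Capabilities: []*csi.PluginCapability{
            {Type: &csi.PluginCapability_Service_{Service: &csi.PluginCapability_Service{Type: csi.PluginCapability_Service_CONTROLLER_SERVICE}}},
            {Type: &csi.PluginCapability_Service_{Service: &csi.PluginCapability_Service{Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS}}},
        },
    }, nil
}

// ControllerGetCapabilities advertises which optional Controller RPCs are implemented.
func (d *driver) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (*csi.ControllerGetCapabilitiesResponse, error) {
    rpc := func(t csi.ControllerServiceCapability_RPC_Type) *csi.ControllerServiceCapability {
        return &csi.ControllerServiceCapability{
            Type: &csi.ControllerServiceCapability_Rpc{Rpc: &csi.ControllerServiceCapability_RPC{Type: t}},
        }
    }
    return &csi.ControllerGetCapabilitiesResponse{
        Capabilities: []*csi.ControllerServiceCapability{
            rpc(csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME),
            rpc(csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME),
            rpc(csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT),
        },
    }, nil
}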

Kubernetes CSI Sidecar Containers

Kubernetes CSI Sidecar Containers are a set of standard containers that aim to simplify the development and deployment of CSI Drivers on Kubernetes.

These containers contain common logic to watch the Kubernetes API, trigger appropriate operations against the “CSI volume driver” container, and update the Kubernetes API as appropriate.

The containers are intended to be bundled with third-party CSI driver containers and deployed together as pods.

The containers are developed and maintained by the Kubernetes Storage community.

Use of the containers is strictly optional, but highly recommended.

Benefits of these sidecar containers include:

  • Reduction of "boilerplate" code.
    • CSI Driver developers do not have to worry about complicated, "Kubernetes specific" code.
  • Separation of concerns.
    • Code that interacts with the Kubernetes API is isolated from (and in a different container than) the code that implements the CSI interface.

The Kubernetes development team maintains the following Kubernetes CSI Sidecar Containers:

CSI external-provisioner

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-provisioner

Status: GA/Stable

Latest stable release        Branch       Compatible with CSI Version  Container Image                        Min k8s Version  Max k8s Version
external-provisioner v1.0.1  release-1.0  v1.0.0                       quay.io/k8scsi/csi-provisioner:v1.0.1  v1.13            -
external-provisioner v0.4.2  release-0.4  v0.3.0                       quay.io/k8scsi/csi-provisioner:v0.4.2  v1.10            -

Description

The CSI external-provisioner is a sidecar container that watches the Kubernetes API server for PersistentVolumeClaim objects.

It calls CreateVolume against the specified CSI endpoint to provision a new volume.

Volume provisioning is triggered by the creation of a new Kubernetes PersistentVolumeClaim object, provided the PVC references a Kubernetes StorageClass whose provisioner field matches the name returned by the specified CSI endpoint in the GetPluginInfo call.

Once a new volume is successfully provisioned, the sidecar container creates a Kubernetes PersistentVolume object to represent the volume.

When a PersistentVolumeClaim object bound to a PersistentVolume created by this driver is deleted, and the reclaim policy is delete, the sidecar container triggers a DeleteVolume operation against the specified CSI endpoint to delete the volume. Once the volume is successfully deleted, the sidecar container also deletes the PersistentVolume object representing the volume.

The CSI external-provisioner also supports the Snapshot DataSource. If a VolumeSnapshot object is specified as a data source on a PVC object, the sidecar container fetches information about the snapshot from the bound VolumeSnapshotContent object and populates the data source field in the resulting CreateVolume call to indicate to the storage system that the new volume should be populated using the specified snapshot.

StorageClass Parameters

When provisioning a new volume, the CSI external-provisioner sets the map<string, string> parameters field in the CSI CreateVolumeRequest call to the key/values specified in the StorageClass it is handling.

The CSI external-provisioner (v1.0.1+) also reserves parameter keys prefixed with csi.storage.k8s.io/. Keys with this prefix are not passed to the CSI driver as opaque parameters.

The following reserved StorageClass parameter keys trigger behavior in the CSI external-provisioner:

  • csi.storage.k8s.io/provisioner-secret-name
  • csi.storage.k8s.io/provisioner-secret-namespace
  • csi.storage.k8s.io/controller-publish-secret-name
  • csi.storage.k8s.io/controller-publish-secret-namespace
  • csi.storage.k8s.io/node-stage-secret-name
  • csi.storage.k8s.io/node-stage-secret-namespace
  • csi.storage.k8s.io/node-publish-secret-name
  • csi.storage.k8s.io/node-publish-secret-namespace
  • csi.storage.k8s.io/fstype

If the PVC VolumeMode is set to Filesystem, and the value of csi.storage.k8s.io/fstype is specified, it is used to populate the FsType in CreateVolumeRequest.VolumeCapabilities[x].AccessType and the AccessType is set to Mount.

For more information on how secrets are handled see Secrets & Credentials.

Example StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-example-storage
provisioner: exampledriver.example.com
parameters:
  disk-type: ssd
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: mysecret
  csi.storage.k8s.io/provisioner-secret-namespace: mynamespace
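For illustration, a PersistentVolumeClaim like the following (the names and size are placeholders) would trigger dynamic provisioning through the StorageClass above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # illustrative name
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gold-example-storage
  resources:
    requests:
      storage: 10Gi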

Usage

CSI drivers that support dynamic volume provisioning should use this sidecar container, and advertise the CSI CREATE_DELETE_VOLUME controller capability.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-provisioner/blob/master/README.md.

Deployment

The CSI external-provisioner is deployed as a controller. See deployment section for more details.

CSI external-attacher

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-attacher

Status: GA/Stable

Latest stable release     Branch       Compatible with CSI Version  Container Image                     Min k8s Version  Max k8s Version
external-attacher v1.0.1  release-1.0  v1.0.0                       quay.io/k8scsi/csi-attacher:v1.0.1  v1.13            -
external-attacher v0.4.2  release-0.4  v0.3.0                       quay.io/k8scsi/csi-attacher:v0.4.2  v1.10            -

Description

The CSI external-attacher is a sidecar container that watches the Kubernetes API server for VolumeAttachment objects and triggers Controller[Publish|Unpublish]Volume operations against a CSI endpoint.

Usage

CSI drivers that require integrating with the Kubernetes volume attach/detach hooks should use this sidecar container, and advertise the CSI PUBLISH_UNPUBLISH_VOLUME controller capability.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-attacher/blob/master/README.md.

Deployment

The CSI external-attacher is deployed as a controller. See deployment section for more details.

CSI external-snapshotter

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-snapshotter

Status: Alpha

Latest stable release        Branch       Compatible with CSI Version  Container Image                        Min k8s Version  Max k8s Version
external-snapshotter v1.0.1  release-1.0  v1.0.0                       quay.io/k8scsi/csi-snapshotter:v1.0.1  v1.13            -
external-snapshotter v0.4.1  release-0.4  v0.3.0                       quay.io/k8scsi/csi-snapshotter:v0.4.1  v1.10            -

Description

The CSI external-snapshotter is a sidecar container that watches the Kubernetes API server for VolumeSnapshot and VolumeSnapshotContent CRD objects.

The creation of a new VolumeSnapshot object referencing a SnapshotClass CRD object corresponding to this driver causes the sidecar container to trigger a CreateSnapshot operation against the specified CSI endpoint to provision a new snapshot. When a new snapshot is successfully provisioned, the sidecar container creates a Kubernetes VolumeSnapshotContent object to represent the new snapshot.

When a VolumeSnapshot object bound to a VolumeSnapshotContent object created by this driver is deleted, and the deletion policy is delete, the sidecar container triggers a DeleteSnapshot operation against the specified CSI endpoint to delete the snapshot. Once the snapshot is successfully deleted, the sidecar container also deletes the VolumeSnapshotContent object representing the snapshot.

For detailed information about volume snapshot and restore functionality, see Volume Snapshot & Restore.

Usage

CSI drivers that support provisioning volume snapshots and the ability to provision new volumes using those snapshots should use this sidecar container, and advertise the CSI CREATE_DELETE_SNAPSHOT controller capability.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-snapshotter/blob/master/README.md.

Deployment

The CSI external-snapshotter is deployed as a controller. See deployment section for more details.

For an example deployment, see this example which deploys external-snapshotter and external-provisioner with the Hostpath CSI driver.

CSI node-driver-registrar

Status and Releases

Git Repository: https://github.com/kubernetes-csi/node-driver-registrar

Status: GA/Stable

Latest stable release         Branch       Compatible with CSI Version  Container Image                                  Min k8s Version  Max k8s Version
node-driver-registrar v1.0.2  release-1.0  v1.0.0                       quay.io/k8scsi/csi-node-driver-registrar:v1.0.2  v1.13            -
driver-registrar v0.4.2       release-0.4  v0.3.0                       quay.io/k8scsi/driver-registrar:v0.4.2           v1.10            -

Description

The CSI node-driver-registrar is a sidecar container that fetches driver information (using NodeGetInfo) from a CSI endpoint and registers it with the kubelet on that node using the kubelet plugin registration mechanism.

Usage

Kubelet directly issues CSI NodeGetInfo, NodeStageVolume, and NodePublishVolume calls against CSI drivers. It uses the kubelet plugin registration mechanism to discover the unix domain socket to talk to the CSI driver. Therefore, all CSI drivers should use this sidecar container to register themselves with kubelet.

For detailed information (binary parameters, etc.), see the README of the relevant branch.

Deployment

The CSI node-driver-registrar is deployed per node. See deployment section for more details.
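For illustration, the node-driver-registrar container in a node DaemonSet might look like the following sketch. The driver name, socket paths, and kubelet registration directory are assumptions that vary by driver and Kubernetes version, and the referenced plugin-dir and registration-dir volumes must be defined as hostPath volumes in the pod spec.

        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.0.2
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/exampledriver.example.com/csi.sock"
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration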

CSI cluster-driver-registrar

Status and Releases

Git Repository: https://github.com/kubernetes-csi/cluster-driver-registrar

Status: Alpha

Latest stable release            Branch       Compatible with CSI Version  Container Image                                     Min k8s Version  Max k8s Version
cluster-driver-registrar v1.0.1  release-1.0  v1.0.0                       quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1  v1.13            -
driver-registrar v0.4.2          release-0.4  v0.3.0                       quay.io/k8scsi/driver-registrar:v0.4.2              v1.10            -

Description

The CSI cluster-driver-registrar is a sidecar container that registers a CSI Driver with a Kubernetes cluster by creating a CSIDriver Object which enables the driver to customize how Kubernetes interacts with it.

Usage

CSI drivers that use one of the following Kubernetes features should use this sidecar container:

  • Skip Attach
    • For drivers that don't support ControllerPublishVolume, this indicates to Kubernetes to skip the attach operation and eliminates the need to deploy the external-attacher sidecar.
  • Pod Info on Mount
    • This causes Kubernetes to pass metadata such as Pod name and namespace to the NodePublishVolume call.

If you are not using one of these features, this sidecar container (and the creation of the CSIDriver Object) is not required. However, it is still recommended, because the CSIDriver Object makes it easier for users to discover the CSI drivers installed on their clusters.

For detailed information (binary parameters, etc.), see the README of the relevant branch.

Deployment

The CSI cluster-driver-registrar is deployed as a controller. See deployment section for more details.

CSI livenessprobe

Status and Releases

Git Repository: https://github.com/kubernetes-csi/livenessprobe

Status: GA/Stable

Latest stable release  Branch          Compatible with CSI Version  Container Image                      Min k8s Version  Max k8s Version
livenessprobe v1.0.2   release-1.0     v1.0.0                       quay.io/k8scsi/livenessprobe:v1.0.2  v1.13            -
Unsupported            No 0.x branch.  v0.3.0                       quay.io/k8scsi/livenessprobe:v0.4.1  v1.10            -

Description

The CSI livenessprobe is a sidecar container that monitors the health of the CSI driver and reports it to Kubernetes via the Liveness Probe mechanism. This enables Kubernetes to automatically detect issues with the driver and restart the pod to try and fix the issue.

Usage

All CSI drivers should use the liveness probe to improve the availability of the driver while deployed on Kubernetes.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/livenessprobe/blob/master/README.md.

Deployment

The CSI livenessprobe is deployed as part of controller and node deployments. See deployment section for more details.
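For illustration, wiring up the livenessprobe might look like the following sketch. The socket path is a placeholder, and the health port 9808 and /healthz path are the defaults assumed here; check the livenessprobe README for the flags supported by the release you deploy.

        - name: liveness-probe
          image: quay.io/k8scsi/livenessprobe:v1.0.2
          args:
            - "--csi-address=/csi/csi.sock"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi

The CSI driver container in the same pod then declares a liveness probe against the port exposed by the sidecar, for example:

          livenessProbe:
            httpGet:
              path: /healthz
              port: 9808
            initialDelaySeconds: 10
            periodSeconds: 60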

CSI CRDs

Status: Alpha

The Kubernetes CSI development team created a set of Custom Resource Definitions (CRDs).

There are currently two CRDs: CSIDriver and CSINodeInfo (both described below).

Definitions of the CRDs can be found here.

The CRDs are automatically deployed on Kubernetes via a Kubernetes Storage CRD addon https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/storage-crds. Ensure your Kubernetes cluster deployment mechanism (kops, etc.) enables the addon and/or installs the CRDs.

To verify the CRDs are installed, issue the following command: kubectl get crd. Verify the result includes the CRDs. If your deployment does not automatically install the CRDs, they may be manually installed via kubectl create -f {file}.yaml using the files here.
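For example, on a cluster with both CRDs installed, the output should include entries similar to the following (the exact columns and timestamps vary by cluster and Kubernetes version):

$> kubectl get crd
NAME                              CREATED AT
csidrivers.csi.storage.k8s.io     2018-10-04T21:15:30Z
csinodeinfos.csi.storage.k8s.io   2018-10-04T21:15:30Z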

The schema definition for the custom resources (CRs) can be found here: https://github.com/kubernetes/csi-api/blob/master/pkg/apis/csi/v1alpha1/types.go

CSIDriver Object

Status

Alpha

What is the CSIDriver object?

The CSIDriver Kubernetes API object serves two purposes:

  1. Simplify driver discovery
    • If a CSI driver creates a CSIDriver object, Kubernetes users can easily discover the CSI Drivers installed on their cluster (simply by issuing kubectl get CSIDriver).
  2. Customizing Kubernetes behavior
    • Kubernetes has a default set of behaviors when dealing with CSI Drivers (for example, it calls the Attach/Detach operations by default). This object allows a CSI driver to specify how Kubernetes should interact with it.

What fields does the CSIDriver object have?

Here is an example of a v1alpha1 CSIDriver object:

apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
  name: mycsidriver.example.com
spec:
  attachRequired: true
  podInfoOnMountVersion: v1

There are three important fields:

  • name
    • This should correspond to the full name of the CSI driver.
  • attachRequired
    • Indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume method), and that Kubernetes should call attach and wait for any attach operation to complete before proceeding to mounting.
    • If a CSIDriver object does not exist for a given CSI Driver, the default is true -- meaning attach will be called.
    • If a CSIDriver object exists for a given CSI Driver, but this field is not specified, it also defaults to true -- meaning attach will be called.
    • For more information see Skip Attach.
  • podInfoOnMountVersion
    • Indicates this CSI volume driver requires additional pod information (like pod name, pod UID, etc.) during mount operations.
    • If value is not specified, pod information will not be passed on mount.
    • If value is set to a valid version, Kubelet will pass pod information as volume_context in CSI NodePublishVolume calls.
    • Supported versions:
      • Version "v1" will pass the following additional fields in volume_context:
        • "csi.storage.k8s.io/pod.name": pod.Name
        • "csi.storage.k8s.io/pod.namespace": pod.Namespace
        • "csi.storage.k8s.io/pod.uid": string(pod.UID)
    • For more information see Pod Info on Mount.

What creates the CSIDriver object?

CSI drivers do not need to create the CSIDriver object directly. Instead they may use the cluster-driver-registrar sidecar container (customizing it as needed with startup parameters) -- when deployed with a CSI driver it automatically creates a CSIDriver CR representing the driver.

Enabling CSIDriver

The CSIDriver object is available as alpha starting with Kubernetes v1.12. Because it is an alpha feature, it is disabled by default. It is planned to be moved to beta in Kubernetes v1.14 and enabled by default.

To enable the use of CSIDriver on Kubernetes, do the following:

  1. Ensure the feature gate is enabled via the following Kubernetes feature flag: --feature-gates=CSIDriverRegistry=true
  2. Either ensure the CSIDriver CRD is automatically installed via the Kubernetes Storage CRD addon OR manually install the CSIDriver CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/manifests/csidriver.yaml

Listing registered CSI drivers

Using the CSIDriver CRD, it is now possible to query Kubernetes to get a list of registered drivers running in the cluster as shown below:

$> kubectl get csidrivers.csi.storage.k8s.io
NAME           AGE
csi-hostpath   2m

Or get a more detailed view of your registered driver with:

$> kubectl describe csidrivers.csi.storage.k8s.io
Name:         csi-hostpath
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  csi.storage.k8s.io/v1alpha1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2018-10-04T21:15:30Z
  Generation:          1
  Resource Version:    390
  Self Link:           /apis/csi.storage.k8s.io/v1alpha1/csidrivers/csi-hostpath
  UID:                 9f854aa6-c81a-11e8-bdce-000c29e88ff1
Spec:
  Attach Required:            true
  Pod Info On Mount Version:
Events:                       <none>

CSINodeInfo Object

Status

Alpha

What is the CSINodeInfo object?

CSI drivers generate node specific information. Instead of storing this in the Kubernetes Node API Object, a new CSI specific Kubernetes CRD was created, the CSINodeInfo CRD.

It serves the following purposes:

  1. Mapping Kubernetes node name to CSI node name
    • The CSI NodeGetInfo call returns the name by which the storage system refers to a node. Kubernetes must use this name in future ControllerPublishVolume calls. Therefore, when a new CSI driver is registered, Kubernetes stores the storage system node ID in the CSINodeInfo object for future reference.
  2. Driver availability
    • A way for kubelet to communicate to the kube-controller-manager and Kubernetes scheduler whether the driver is available (registered) on the node or not.
  3. Volume topology
    • The CSI NodeGetInfo call returns a set of key/value labels identifying the topology of that node. Kubernetes uses this information to do topology-aware provisioning (see PVC Volume Binding Modes for more details). It stores the key/values as labels on the Kubernetes node object. In order to recall which Node label keys belong to a specific CSI driver, the kubelet stores the keys in the CSINodeInfo object for future reference.

What fields does the CSINodeInfo object have?

Here is an example of a v1alpha1 CSINodeInfo object:

apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSINodeInfo
metadata:
  name: node1
spec:
  drivers:
  - name: mycsidriver.example.com
    available: true
    volumePluginMechanism: csi-plugin
status:
  drivers:
  - name: mycsidriver.example.com
    nodeID: storageNodeID1
    topologyKeys: ['mycsidriver.example.com/regions', "mycsidriver.example.com/zones"]

Where the fields mean:

  • drivers - list of CSI drivers running on the node and their properties.
  • name - the name of the CSI driver that this entry refers to.
  • nodeID - the identifier for the node as assigned by the driver.
  • topologyKeys - a list of topology keys assigned to the node, as supported by the driver.

What creates the CSINodeInfo object?

CSI drivers do not need to create the CSINodeInfo object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINodeInfo object for the CSI driver as part of kubelet plugin registration.

Enabling CSINodeInfo

The CSINodeInfo object is available as alpha starting with Kubernetes v1.12. Because it is an alpha feature, it is disabled by default.

To enable use of CSINodeInfo on Kubernetes, do the following:

  1. Ensure the feature gate is enabled with --feature-gates=CSINodeInfo=true
  2. Either ensure the CSINodeInfo CRD is automatically installed via the Kubernetes Storage CRD addon OR manually install the CSINodeInfo CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/manifests/csinodeinfo.yaml

Features

The Kubernetes implementation of CSI has multiple sub-features. This section describes these sub-features, their status (although support for CSI in Kubernetes is GA/stable, support for sub-features moves independently, so a sub-feature may be alpha or beta), and how to integrate them into your CSI Driver.

Secrets and Credentials

CSI Driver Secrets

Some drivers may require a secret in order to issue operations against a backend (a service account, for example). If this secret is required at the "per driver" granularity (and not different "per CSI operation" or "per volume"), the secret may be injected into the CSI driver pods via standard Kubernetes secret distribution mechanisms.
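For illustration, a per-driver credential could be injected into the CSI driver container from a standard Kubernetes Secret as in the following sketch (the image, secret name, key, and environment variable are placeholders):

        - name: my-csi-driver
          image: example.com/exampledriver:v0.1.0   # illustrative image
          env:
            - name: BACKEND_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: exampledriver-credentials
                  key: token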

CSI Operation Secrets

The CSI spec also accepts secrets in each of the following protos:

  • CreateVolumeRequest
  • DeleteVolumeRequest
  • ControllerPublishVolumeRequest
  • ControllerUnpublishVolumeRequest
  • CreateSnapshotRequest
  • DeleteSnapshotRequest
  • ControllerExpandVolumeRequest
  • NodeStageVolumeRequest
  • NodePublishVolumeRequest

These enable CSI drivers to accept/require "per CSI operation" or "per volume" secrets (a volume encryption key, for example).

The CSI external-provisioner enables Kubernetes cluster admins to populate the secret fields for these protos with data from Kubernetes Secret objects. For example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
  type: pd-ssd
  csi.storage.k8s.io/provisioner-secret-name: fast-storage-provision-key
  csi.storage.k8s.io/provisioner-secret-namespace: pd-ssd-credentials

Create/Delete Volume Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/provisioner-secret-name
  • csi.storage.k8s.io/provisioner-secret-namespace

The value of both parameters refers to the name and namespace of the Secret object in the Kubernetes API.

The value of both parameters may be a literal or a template containing the following variable, which is automatically replaced by the external-provisioner at provision time:

  • ${pv.name}
    • Automatically replaced with the name of the PersistentVolume object being provisioned.

If specified, the CSI external-provisioner will attempt to fetch the secret before provisioning and deletion.

If no such secret exists in the Kubernetes API, or the provisioner is unable to fetch it, the provision or delete operation fails.

If the secret is retrieved successfully, the provisioner passes it to the CSI driver in the CreateVolumeRequest.secrets or DeleteVolumeRequest.secrets field.

Controller Publish/Unpublish Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/controller-publish-secret-name
  • csi.storage.k8s.io/controller-publish-secret-namespace

The value of both parameters refers to the name and namespace of the Secret object in the Kubernetes API.

The value of both parameters may be a literal or a template containing the following variables that are automatically replaced by the external-provisioner at provision time:

  • ${pv.name}
    • Automatically replaced with the name of the PersistentVolume object being provisioned.
  • ${pvc.namespace}
    • Automatically replaced with the namespace of the PersistentVolumeClaim object being provisioned.

The value of csi.storage.k8s.io/controller-publish-secret-name also supports the following template variables which are automatically replaced by the external-provisioner at provision time:

  • ${pvc.name}
    • Automatically replaced with the name of the PersistentVolumeClaim object being provisioned.
  • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']})
    • Automatically replaced with the value of the specified annotation from the PersistentVolumeClaim object being provisioned.

If specified, once provisioning is successful, the CSI external-provisioner sets the CSIPersistentVolumeSource.ControllerPublishSecretRef field in the new PersistentVolume object to refer to this secret.

If specified, the CSI external-attacher attempts to fetch the secret referenced by the CSIPersistentVolumeSource.ControllerPublishSecretRef before an attach or detach operation.

If no such secret exists in the Kubernetes API, or the external-attacher is unable to fetch it, the attach or detach operation fails.

If the secret is retrieved successfully, the external-attacher passes it to the CSI driver in the ControllerPublishVolumeRequest.secrets or ControllerUnpublishVolumeRequest.secrets field.

Node Stage Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/node-stage-secret-name
  • csi.storage.k8s.io/node-stage-secret-namespace

The value of both parameters refers to the name and namespace of the Secret object in the Kubernetes API.

The value of both parameters may be a literal or a template containing the following variables that are automatically replaced by the external-provisioner at provision time:

  • ${pv.name}
    • Automatically replaced with the name of the PersistentVolume object being provisioned.
  • ${pvc.namespace}
    • Automatically replaced with the namespace of the PersistentVolumeClaim object being provisioned.

The value of csi.storage.k8s.io/node-stage-secret-name also supports the following template variables which are automatically replaced by the external-provisioner at provision time:

  • ${pvc.name}
    • Automatically replaced with the name of the PersistentVolumeClaim object being provisioned.
  • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']})
    • Automatically replaced with the value of the specified annotation from the PersistentVolumeClaim object being provisioned.

If specified, once provisioning is successful, the CSI external-provisioner sets the CSIPersistentVolumeSource.NodeStageSecretRef field in the new PersistentVolume object to refer to this secret.

If specified, the Kubernetes kubelet attempts to fetch the secret referenced by the CSIPersistentVolumeSource.NodeStageSecretRef field before a mount device operation.

If no such secret exists in the Kubernetes API, or the kubelet is unable to fetch it, the mount device operation fails.

If the secret is retrieved successfully, the kubelet passes it to the CSI driver in the NodeStageVolumeRequest.secrets field.

Node Publish Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/node-publish-secret-name
  • csi.storage.k8s.io/node-publish-secret-namespace

The value of both parameters refers to the name and namespace of the Secret object in the Kubernetes API.

The value of both parameters may be a literal or a template containing the following variables that are automatically replaced by the external-provisioner at provision time:

  • ${pv.name}
    • Automatically replaced with the name of the PersistentVolume object being provisioned.
  • ${pvc.namespace}
    • Automatically replaced with the namespace of the PersistentVolumeClaim object being provisioned.

The value of csi.storage.k8s.io/node-publish-secret-name also supports the following template variables which are automatically replaced by the external-provisioner at provision time:

  • ${pvc.name}
    • Automatically replaced with the name of the PersistentVolumeClaim object being provisioned.
  • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']})
    • Automatically replaced with the value of the specified annotation from the PersistentVolumeClaim object being provisioned.

If specified, once provisioning is successful, the CSI external-provisioner sets the CSIPersistentVolumeSource.NodePublishSecretRef field in the new PersistentVolume object to refer to this secret.

If specified, the Kubernetes kubelet attempts to fetch the secret referenced by the CSIPersistentVolumeSource.NodePublishSecretRef field before a mount operation.

If no such secret exists in the Kubernetes API, or the kubelet is unable to fetch it, the mount operation fails.

If the secret is retrieved successfully, the kubelet passes it to the CSI driver in the NodePublishVolumeRequest.secrets field.

For example, consider this StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
  type: pd-ssd
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.annotations['team.example.com/key']}
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}

This StorageClass instructs the CSI provisioner to do the following:

  • Create a PersistentVolume with:
    • a "node publish secret" in the same namespace as the PersistentVolumeClaim that triggered the provisioning, with a name specified as an annotation on the PersistentVolumeClaim. This could be used to give the creator of the PersistentVolumeClaim the ability to specify a secret containing a decryption key they have control over.

Handling Sensitive Information

CSI Drivers that accept secrets SHOULD handle this data carefully. It may contain sensitive information and MUST be treated as such (e.g. not logged).

To make it easier to handle secret fields (e.g. strip them from CSI protos when logging), the CSI spec defines a decorator (csi_secret) on all fields containing sensitive information. Any fields decorated with csi_secret MUST be treated as if they contain sensitive information (e.g. not logged, etc.).

The Kubernetes CSI development team also provides a Go package called protosanitizer that CSI driver developers may use to remove the values of all fields in a gRPC message decorated with csi_secret. The library can be found in kubernetes-csi/csi-lib-utils/protosanitizer. The Kubernetes CSI Sidecar Containers and sample drivers use this library to ensure no sensitive information is logged.
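For illustration, a driver could log an incoming request with secret values stripped as in the following Go sketch (assuming the CSI v1.0 Go bindings and the csi-lib-utils/protosanitizer package; the request contents are illustrative):

package main

import (
    "fmt"

    "github.com/container-storage-interface/spec/lib/go/csi"
    "github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
)

func main() {
    req := &csi.CreateVolumeRequest{
        Name:    "test-volume",
        Secrets: map[string]string{"password": "hunter2"}, // illustrative secret
    }
    // StripSecrets returns a wrapper whose String() renders the message with
    // every csi_secret-decorated field replaced, so the secret value is never printed.
    fmt.Printf("GRPC request: %s\n", protosanitizer.StripSecrets(req))
}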

Snapshot & Restore Feature

Status: Alpha

Many storage systems provide the ability to create a "snapshot" of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to provision a new volume (pre-populated with the snapshot data) or to restore the existing volume to a previous state (represented by the snapshot).

Kubernetes CSI currently enables CSI Drivers to expose the following functionality via the Kubernetes API:

  1. Creation and deletion of volume snapshots via Kubernetes native API.
  2. Creation of new volumes pre-populated with the data from a snapshot via Kubernetes dynamic volume provisioning.

Implementing Snapshot & Restore Functionality

To implement the snapshot feature, a CSI driver MUST:

  • Implement the CREATE_DELETE_SNAPSHOT controller capability and, optionally, the LIST_SNAPSHOTS controller capability.
  • Implement the CreateSnapshot and DeleteSnapshot controller RPCs and, optionally, the ListSnapshots RPC.

For details, see the CSI spec.

Deploying Snapshot & Restore Functionality

The Kubernetes CSI development team maintains the external-snapshotter Kubernetes CSI Sidecar Containers. This sidecar container implements the logic for watching the Kubernetes API for snapshot objects and issuing the appropriate CSI snapshot calls against a CSI endpoint. For more details, see external-snapshotter documentation.

Snapshot APIs

Similar to the API for managing Kubernetes Persistent Volumes, the Kubernetes Volume Snapshots introduce three new API objects for managing snapshots: VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass. See Kubernetes Snapshot documentation for more details.

Unlike the core Kubernetes Persistent Volume objects, these Snapshot objects are defined as Custom Resource Definitions (CRDs). This is because the Kubernetes project is moving away from having resource types pre-defined in the API server. This allows the API server to be reused for projects other than Kubernetes, and consumers (like Kubernetes) simply install the resource types they require as CRDs. Because the Snapshot API types are not built in to Kubernetes, they must be installed prior to use.

The CRDs are automatically deployed by the CSI external-snapshotter sidecar.

The schema definition for the custom resources (CRs) can be found here: https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/types.go

In addition to these new CRD objects, a new, alpha DataSource field has been added to the PersistentVolumeClaim object. This new field enables dynamic provisioning of new volumes that are automatically pre-populated with data from an existing snapshot.

Since volume snapshot is an alpha feature in Kubernetes v1.12, you need to enable a new alpha feature gate called VolumeSnapshotDataSource in the Kubernetes API server binary.

--feature-gates=VolumeSnapshotDataSource=true
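For illustration, a PersistentVolumeClaim restored from an existing snapshot via the DataSource field might look like the following sketch (the snapshot name, StorageClass, and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: fast-storage
  dataSource:
    name: example-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi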

Test Snapshot Feature

Use the following example yaml files to test the snapshot feature.

Create a StorageClass:

kubectl create -f storageclass.yaml

Create a PVC:

kubectl create -f pvc.yaml

Create a VolumeSnapshotClass:

kubectl create -f snapshotclass.yaml

Create a VolumeSnapshot:

kubectl create -f snapshot.yaml

Create a PVC from a VolumeSnapshot:

kubectl create -f restore.yaml
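As a rough sketch of what snapshotclass.yaml and snapshot.yaml might contain for the v1alpha1 snapshot CRDs (the driver name, class name, and PVC name are placeholders):

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
snapshotter: exampledriver.example.com
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
spec:
  snapshotClassName: example-snapclass
  source:
    name: example-pvc
    kind: PersistentVolumeClaim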

PersistentVolumeClaim not Bound

If a PersistentVolumeClaim is not bound, the attempt to create a volume snapshot from that PersistentVolumeClaim will fail. No retries will be attempted. An event will be logged to indicate that the PersistentVolumeClaim is not bound.

Note that this could happen if the PersistentVolumeClaim spec and the VolumeSnapshot spec are in the same YAML file. In this case, when the VolumeSnapshot object is created, the PersistentVolumeClaim object is created but volume creation is not complete and therefore the PersistentVolumeClaim is not yet bound. You must wait until the PersistentVolumeClaim is bound and then create the snapshot.

Examples

The following CSI drivers implement the snapshot feature and may be referred to for example implementations:

CSI Topology Feature

This page is still under active development.

Status: Alpha

Some storage systems expose volumes that are not equally accessible by all nodes in a Kubernetes cluster. Instead volumes may be constrained to some subset of node(s) in the cluster. The cluster may be segmented into, for example, “racks” or “regions” and “zones” or some other grouping, and a given volume may be accessible only from one of those groups.

To enable orchestration systems, like Kubernetes, to work well with storage systems which expose volumes that are not equally accessible by all nodes, the CSI spec enables:

  1. Ability for a CSI Driver to opaquely specify where a particular node exists (e.g. "node A" is in "zone 1").
  2. Ability for Kubernetes (users or components) to influence where a volume is provisioned (e.g. provision new volume in either "zone 1" or "zone 2").
  3. Ability for a CSI Driver to opaquely specify where a particular volume exists (e.g. "volume X" is accessible by all nodes in "zone 1" and "zone 2").

Kubernetes and the Kubernetes CSI Sidecar Containers use these abilities to make intelligent scheduling and provisioning decisions: Kubernetes can both influence where a volume is provisioned and act on the topology information reported for each volume.

Implementing Topology

  • TODO: Explain the CSI calls and capabilities that must be implemented.
  • TODO: Explain what CSI CRDs the feature depends on.

Usage

In order to support topology-aware dynamic provisioning mechanisms available in Kubernetes, the external-provisioner must have the Topology feature enabled:

--feature-gates=Topology=true

In addition, in the Kubernetes cluster the CSINodeInfo alpha feature must be enabled (refer to the CSINodeInfo Object section for more info):

--feature-gates=CSINodeInfo=true

The KubeletPluginsWatcher feature must also be enabled (GA and enabled by default in Kubernetes 1.13).
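For illustration, a topology-aware StorageClass might delay binding until a pod is scheduled and restrict provisioning to specific zones, as in the following sketch (the provisioner, topology key, and values are placeholders and must match the topology keys reported by the driver):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-storage
provisioner: exampledriver.example.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: exampledriver.example.com/zone
        values:
          - zone-1
          - zone-2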

Storage Internal Topology

Note that a storage system may also have an "internal topology" that is different from (independent of) the topology of the cluster where workloads are scheduled. That is, volumes exposed by the storage system are equally accessible by all nodes in the Kubernetes cluster, but the storage system has some internal topology that may influence, for example, the performance of a volume from a given node.

CSI does not currently expose a first class mechanism to influence such storage system internal topology during provisioning, so Kubernetes cannot programmatically influence it. However, a CSI Driver may expose the ability to specify internal storage topology during volume provisioning using an opaque parameter in the CreateVolume CSI call (CSI enables CSI Drivers to expose an arbitrary set of configuration options during dynamic provisioning by allowing opaque parameters to be passed from cluster admins to the storage plugins). This would enable cluster admins to control the storage system internal topology during provisioning.

Raw Block Volume Feature

This page is still under active development.

Status: Alpha

This page documents how to implement raw block volume support to a CSI Driver.

A block volume is a volume that will appear as a block device inside the container. A mounted (file) volume is a volume that will be mounted using a specified file system and appear as a directory inside the container.

The CSI spec supports both block and mounted (file) volumes.

While Kubernetes support of mounted (file) volumes is GA/stable, support for block volume in Kubernetes was introduced in v1.9, and promoted to beta in Kubernetes 1.13.

The Kubernetes CSI tooling support for block volumes is still alpha.

Usage

Because this feature is still alpha it is disabled by default in Kubernetes. You must enable the feature on Kubernetes:

--feature-gates=BlockVolume=true,CSIBlockVolume=true...

Implementing Raw Block Volume Support

  • TODO: detail how to expose raw block volume support in CSI Driver.
  • TODO: explain how raw block differs from mounted (file)
  • TODO: answer: can a CSI driver choose to implement only raw block and not mounted (file)?
  • TODO: detail the level of raw block volume functionality the CSI Sidecar containers currently provide.
  • TODO: detail how Kubernetes API raw block fields get mapped to CSI methods/fields.

Skip Kubernetes Attach and Detach

Status: Alpha

Problem

Some volume drivers, NFS for example, have no concept of an attach operation (ControllerPublishVolume). However, Kubernetes always executes Attach and Detach operations, even if the CSI driver does not implement an attach operation (i.e. even if the CSI Driver does not implement a ControllerPublishVolume call).

This was problematic because it meant all CSI drivers had to handle Kubernetes attachment. CSI Drivers that did not implement the PUBLISH_UNPUBLISH_VOLUME controller capability could work around this by deploying an external-attacher; the external-attacher would respond to Kubernetes attach operations and simply perform a no-op (because the CSI driver did not advertise the PUBLISH_UNPUBLISH_VOLUME controller capability).

Although the workaround works, it adds an unnecessary operation (round-trip) in the preparation of a volume for a container, and requires CSI Drivers to deploy an unnecessary sidecar container (external-attacher).

Skip Attach with CSI Driver Object

The CSIDriver Object enables CSI Drivers to specify how Kubernetes should interact with it.

Specifically, setting the attachRequired field to false instructs Kubernetes to skip any attach operation altogether.

For example, the existence of the following object would cause Kubernetes to skip attach operations for all volumes of the CSI Driver testcsidriver.example.com.

apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  attachRequired: false

The easiest way to use this feature is to deploy the cluster-driver-registrar sidecar container. Once the flags to this container are configured correctly, it will automatically create a CSIDriver Object when it starts with the correct fields set.

Pod Info on Mount

Status: Alpha

Problem

CSI avoids encoding Kubernetes specific information in to the specification, since it aims to support multiple orchestration systems (beyond just Kubernetes).

This can be problematic because some CSI drivers require information about the workload (e.g. which pod is referencing this volume), and CSI does not provide this information natively to drivers.

Pod Info on Mount with CSI Driver Object

The CSIDriver Object enables CSI Drivers to specify how Kubernetes should interact with it.

Specifically, the podInfoOnMountVersion field instructs Kubernetes that the CSI driver requires additional pod information (like podName, podUID, etc.) during mount operations.

For example, the existence of the following object would cause Kubernetes to add pod information at mount time to the NodePublishVolumeRequest.volume_context map.

apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  podInfoOnMountVersion: v1

There is only one podInfoOnMountVersion version currently supported: v1.

The value v1 for podInfoOnMountVersion will result in the following key/values being added to volume_context:

  • csi.storage.k8s.io/pod.name: {pod.Name}
  • csi.storage.k8s.io/pod.namespace: {pod.Namespace}
  • csi.storage.k8s.io/pod.uid: {pod.UID}

The easiest way to use this feature is to deploy the cluster-driver-registrar sidecar container. Once the flags to this container are configured correctly, it will automatically create a CSIDriver Object when it starts with the correct fields set.

Ephemeral Local Volumes

This page is still under active development.

Status: Alpha

Kubernetes supports three types of volumes:

  1. Remote Persistent Volumes
  2. Local Persistent Volumes
  3. Local Ephemeral Volumes

The initial focus of Kubernetes CSI was Remote Persistent Volumes. However, the goal is for CSI to support all three types.

This page documents how to create "Local Ephemeral Volumes" for Kubernetes using CSI.

What is a Local Ephemeral Volume?

A Local Ephemeral Volume is a volume whose lifecycle is tied to the lifecycle of a single pod:

  • The volume is "provisioned" (either empty or with some pre-populated data) when the pod is created.
  • The volume is deleted when the pod is terminated.

Kubernetes Secret volumes are a good (non-CSI) example of local ephemeral volumes.

How to write a CSI Driver for Local Ephemeral Volumes

The following features make it easier to develop CSI Drivers that expose local ephemeral volumes:

  • Pod Info on Mount
    • This feature provides the CSI driver pod information at mount time. Many ephemeral volumes write some files at mount time. Often the data they write depends on the pod they are operating on.
  • Skip Attach
    • This instructs Kubernetes to skip any attach operation (ControllerPublishVolume) altogether. Local ephemeral volume drivers generally do not have or need a cluster control plane component.

Features currently in development to improve Local Ephemeral Volume support:

  • Inline Volume Support
    • Having to create a PV and a PVC for every ephemeral volume is onerous. Being able to specify a volume inside a pod definition (not currently possible for CSI drivers) will make that easier.

Deploying CSI Driver on Kubernetes

This page is out-of-date and under active development.

This page describes how CSI driver developers can deploy their driver onto a Kubernetes cluster.

Overview

There are three components plus the kubelet that enable CSI drivers to provide storage to Kubernetes. These components are sidecar containers which are responsible for communication with both Kubernetes and the CSI driver, making the appropriate CSI calls for their respective Kubernetes events.

Sidecar Containers


Sidecar containers manage Kubernetes events and make the appropriate calls to the CSI driver. These are the external attacher, external provisioner, external snapshotter and the driver registrar.

External Attacher

external-attacher is a sidecar container that watches Kubernetes VolumeAttachment objects and triggers CSI ControllerPublish and ControllerUnpublish operations against a driver endpoint. As of this writing, the external attacher does not support leader election and therefore there can be only one running per CSI driver. For more information please read Attaching and Detaching.

Note that even though this is called the external attacher, its function is to issue the CSI ControllerPublish and ControllerUnpublish calls. These calls will most likely occur on a node that is not the one that will mount the volume. For this reason, many CSI drivers do not support these calls, instead doing the attach/detach and mount/unmount in the CSI NodePublish and NodeUnpublish calls issued by the kubelet on the node which is supposed to mount the volume.

External Provisioner

external-provisioner is a Sidecar container that watches Kubernetes PersistentVolumeClaim objects and triggers CSI CreateVolume and DeleteVolume operations against a driver endpoint. For more information please read Provisioning and Deleting.

External Snapshotter

external-snapshotter is a Sidecar container that watches Kubernetes VolumeSnapshot objects and triggers CSI CreateSnapshot and DeleteSnapshot operations against a driver endpoint. For more information please read Snapshot Design Proposal.

Driver Registrar

driver-registrar is a sidecar container that registers the CSI driver with kubelet, and adds the driver's custom NodeId to a label on the Kubernetes Node API Object. It does this by communicating with the Identity service on the CSI driver and also calling the CSI GetNodeId operation. The driver registrar must have the Kubernetes name for the node set through the environment variable KUBE_NODE_NAME as follows:

        - name: csi-driver-registrar
          imagePullPolicy: Always
          image: quay.io/k8scsi/driver-registrar:v0.2.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi

Kubelet


The Kubernetes kubelet runs on every node and is responsible for making the CSI calls NodePublish and NodeUnpublish. These calls mount and unmount the storage volume from the storage system, making it available to the Pod to consume. As noted in the external-attacher section, most CSI drivers choose to implement both their attach/detach and mount/unmount calls in the NodePublish and NodeUnpublish calls. They do this because the kubelet makes the request on the node which is to consume the volume.

Mount point

The volume mount used by the CSI driver for the kubelet pods directory must have mount propagation set to Bidirectional. See the example below:

          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory

RBAC Rules

Sidecar containers need the appropriate permissions to be able to access and manipulate Kubernetes objects. Here are the RBAC rules needed:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-hostpath-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["create", "delete", "get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]

Deploying

Deploying a CSI driver onto Kubernetes is highlighted in detail in Recommended Mechanism for Deploying CSI Drivers on Kubernetes.

Enable privileged Pods

To use CSI drivers, your Kubernetes cluster must allow privileged pods (i.e. --allow-privileged flag must be set to true for both the API server and the kubelet). This is the default in some environments (e.g. GCE, GKE, kubeadm).

Ensure your API server and kubelet are started with the privileged flag:

$ ./kube-apiserver ...  --allow-privileged=true ...
$ ./kubelet ...  --allow-privileged=true ...

Note: Starting from Kubernetes 1.13.0, --allow-privileged defaults to true for the kubelet. The flag will be deprecated in future Kubernetes releases.

Enabling mount propagation

Another feature that CSI depends on is mount propagation. It allows the sharing of volumes mounted by one container with other containers in the same pod, or even with other pods on the same node. For mount propagation to work, the Docker daemon for the cluster must allow shared mounts. See the Kubernetes mount propagation documentation to find out how to enable this feature for your cluster, and the Docker documentation for how to check whether shared mounts are enabled and how to configure Docker for shared mounts.

Examples

  • Simple deployment example using a single pod for all components: see the hostpath example.
  • Full deployment example using a DaemonSet for the node plugin and StatefulSet for the controller plugin: check the NFS driver deployment files.

More information

For more information, please read CSI Volume Plugins in Kubernetes Design Doc.

Example

This page is out-of-date and under active development.

The HostPath driver can be used to provision local storage in a single-node test. This section shows how to deploy and use that driver in Kubernetes.

The deployment of a CSI driver determines which RBAC rules are needed. For example, enabling or disabling leadership election changes which permissions the external-attacher and external-provisioner need. This example deployment uses the original RBAC rule files that are maintained together with those sidecar apps and deploys into the default namespace.

A real production deployment should copy the RBAC files and customize them as explained in the comments of those files.

Deployment

This was initially tested with Kubernetes v1.12 and should still work there. It was also tested with a 1.13 pre-release snapshot. To ensure that all necessary features are enabled, set the following feature gate flags to true:

--feature-gates=CSIPersistentVolume=true,MountPropagation=true,VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true

CSIPersistentVolume is enabled by default in v1.10. MountPropagation is enabled by default in v1.10. VolumeSnapshotDataSource is a new alpha feature in v1.12. KubeletPluginsWatcher is enabled by default in v1.12. CSINodeInfo and CSIDriverRegistry are new alpha features in v1.12.

CRDs need to be created manually for CSIDriverRegistry and CSINodeInfo:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/ab0df28581235f5350f27ce9c27485850a3b2802/pkg/crd/testdata/csidriver.yaml --validate=false

customresourcedefinition.apiextensions.k8s.io/csidrivers.csi.storage.k8s.io created

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/ab0df28581235f5350f27ce9c27485850a3b2802/pkg/crd/testdata/csinodeinfo.yaml --validate=false

customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io created

Create RBAC rules for CSI provisioner

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/1cd1c20a6d4b2fcd25c98a008385b436d61d46a4/deploy/kubernetes/rbac.yaml

clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
role.rbac.authorization.k8s.io/external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/csi-provisioner-role-cfg created

Create RBAC rules for CSI attacher

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-attacher/9da8c6d20d58750ee33d61d0faf0946641f50770/deploy/kubernetes/rbac.yaml

serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
role.rbac.authorization.k8s.io/external-attacher-cfg created
rolebinding.rbac.authorization.k8s.io/csi-attacher-role-cfg created

Create RBAC rules for node plugin

Of the containers in the node plugin pod, only the driver-registrar interacts directly with the Kubernetes API, so only its RBAC rules are needed:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/driver-registrar/87d0059110a8b4a90a6d2b5a8702dd7f3f270b80/deploy/kubernetes/rbac.yaml

serviceaccount/csi-driver-registrar created
clusterrole.rbac.authorization.k8s.io/driver-registrar-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-driver-registrar-role created

Create RBAC rules for CSI snapshotter

The CSI snapshotter is an optional sidecar container. You only need to create these RBAC rules if you want to test the snapshot feature.

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/01bd7f356e6718dee87914232d287631655bef1d/deploy/kubernetes/rbac.yaml

serviceaccount/csi-snapshotter created
clusterrole.rbac.authorization.k8s.io/external-snapshotter-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-role created

Deploy driver-registrar and hostpath CSI plugin in DaemonSet pod

The CSI sidecar apps connect to the CSI driver, so starting the driver first helps avoid timeouts and intermittent container restarts:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/f40a5d1155aae95105a4e9bb8933d750c666e350/test/e2e/testing-manifests/storage-csi/hostpath/hostpath/csi-hostpathplugin.yaml

daemonset.apps/csi-hostpathplugin created

$ kubectl get pod

NAME                       READY   STATUS    RESTARTS   AGE
csi-hostpathplugin-4k7hk   2/2     Running   0          22s

Deploy CSI provisioner in StatefulSet pod

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/f40a5d1155aae95105a4e9bb8933d750c666e350/test/e2e/testing-manifests/storage-csi/hostpath/hostpath/csi-hostpath-provisioner.yaml

service/csi-hostpath-provisioner created
statefulset.apps/csi-hostpath-provisioner created

$ kubectl get pod

NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-provisioner-0   1/1     Running   0          14s
csi-hostpathplugin-4k7hk     2/2     Running   0          75s

Deploy CSI attacher in StatefulSet pod

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/f40a5d1155aae95105a4e9bb8933d750c666e350/test/e2e/testing-manifests/storage-csi/hostpath/hostpath/csi-hostpath-attacher.yaml

service/csi-hostpath-attacher created
statefulset.apps/csi-hostpath-attacher created

$ kubectl get pod

NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          14s
csi-hostpath-provisioner-0   1/1     Running   0          56s
csi-hostpathplugin-4k7hk     2/2     Running   0          117s

Deploy livenessprobe with CSI plugin

The CSI community provides a livenessprobe sidecar container that can be integrated with the CSI driver services (Node and Controller) to report the liveness of the CSI service containers.

The livenessprobe sidecar container exposes an HTTP endpoint that is used in a Kubernetes liveness probe.

Below is an example configuration that needs to be added to the CSI driver services (Node and Controller) YAMLs:

Note: This example is derived from using-livenessprobe from kubernetes-csi/livenessprobe

  - name: hostpath-driver
    image: quay.io/k8scsi/hostpathplugin:vx.x.x
    imagePullPolicy: Always
    securityContext:
      privileged: true
    #
    # Define the port used to GET the plugin health status.
    # 9808 is the default, but it can be changed.
    #
    ports:
    - containerPort: 9808
      name: healthz
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /healthz
        port: healthz
      initialDelaySeconds: 10
      timeoutSeconds: 3
      periodSeconds: 2
      failureThreshold: 1
...
  #
  # Spec for the liveness probe sidecar container
  #
  - name: liveness-probe
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /csi
      name: socket-dir
    image: quay.io/k8scsi/livenessprobe:v0.4.1
    args:
    - --csi-address=/csi/csi.sock
    - --connection-timeout=3s
    - --health-port=9808

Where:

  • --csi-address - specifies the Unix domain socket path, as seen in the container, for the CSI driver. It allows the livenessprobe sidecar to communicate with the driver for driver liveness information. The mount path /csi is mapped to the HostPath entry socket-dir, which is mapped to the directory /var/lib/kubelet/plugins/csi-hostpath.

  • --connection-timeout - specifies how long to wait for the CSI driver socket to become available (default: 30s).

  • --health-port - specifies the TCP port on which to listen for healthz requests (default: 9808).

Deploy CSI snapshotter in StatefulSet pod

The CSI snapshotter is an optional sidecar container. You only need to deploy it if you want to test the snapshot feature.

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/example/snapshot/csi-hostpath-snapshotter.yaml

service/csi-hostpath-snapshotter created
statefulset.apps/csi-hostpath-snapshotter created

$ kubectl get pod

NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          58s
csi-hostpath-provisioner-0   1/1     Running   0          100s
csi-hostpath-snapshotter-0   1/1     Running   0          12s
csi-hostpathplugin-4k7hk     2/2     Running   0          2m41s

Usage

Dynamic provisioning is enabled by creating a csi-hostpath-sc storage class.

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/example/usage/csi-storageclass.yaml

storageclass.storage.k8s.io/csi-hostpath-sc created
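
For reference, a minimal StorageClass for the hostpath driver looks roughly like the following sketch; the provisioner value is assumed to be csi-hostpath, the name this example driver registers with:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: csi-hostpath   # must match the driver name reported by the CSI plugin
reclaimPolicy: Delete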

We can use this storage class to create and claim a new volume:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/example/usage/csi-pvc.yaml

persistentvolumeclaim/csi-pvc created
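
The claim manifest is roughly equivalent to this sketch (name, size, access mode, and storage class match the kubectl output below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc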

$ kubectl get pvc

NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc   Bound    pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            csi-hostpath-sc   3s

$ kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            Delete           Bound    default/csi-pvc   csi-hostpath-sc            3s

The HostPath driver is configured to create new volumes under /tmp inside the hostpath container of the CSI hostpath plugin DaemonSet pod, so the volumes persist only as long as that DaemonSet pod does. We can use such a volume in another pod like this:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/example/usage/csi-app.yaml

pod/my-csi-app created
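
The application pod is roughly equivalent to this sketch (container name, image, command, and mount path match the kubectl describe output below):

apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: ["sleep", "1000000"]
      volumeMounts:
        - name: my-csi-volume
          mountPath: /data
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc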

$ kubectl get pods

NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          117s
csi-hostpath-provisioner-0   1/1     Running   0          2m39s
csi-hostpath-snapshotter-0   1/1     Running   0          71s
csi-hostpathplugin-4k7hk     2/2     Running   0          3m40s
my-csi-app                   1/1     Running   0          14s

$ kubectl describe pods/my-csi-app

Name:               my-csi-app
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               127.0.0.1/127.0.0.1
Start Time:         Wed, 03 Oct 2018 06:59:19 -0700
Labels:             <none>
Annotations:        <none>
Status:             Running
IP:                 172.17.0.5
Containers:
  my-frontend:
    Container ID:  docker://fd2950af39a155bdf08d1da341cfb23aa0d1af3eaaad6950a946355789606e8c
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      1000000
    State:          Running
      Started:      Wed, 03 Oct 2018 06:59:22 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from my-csi-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xms2g (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  my-csi-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-pvc
    ReadOnly:   false
  default-token-xms2g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xms2g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               69s   default-scheduler        Successfully assigned default/my-csi-app to 127.0.0.1
  Normal  SuccessfulAttachVolume  69s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-0571cc14-c714-11e8-8911-000c2967769a"
  Normal  Pulling                 67s   kubelet, 127.0.0.1       pulling image "busybox"
  Normal  Pulled                  67s   kubelet, 127.0.0.1       Successfully pulled image "busybox"
  Normal  Created                 67s   kubelet, 127.0.0.1       Created container
  Normal  Started                 66s   kubelet, 127.0.0.1       Started container

Confirming the setup

Data written inside the app container should be visible in /tmp of the hostpath container:

$ kubectl exec -it my-csi-app /bin/sh
/ # touch /data/hello-world
/ # exit

$ kubectl exec -it $(kubectl get pods --selector app=csi-hostpathplugin -o jsonpath='{.items[*].metadata.name}') -c hostpath /bin/sh
/ # find / -name hello-world
/tmp/057485ab-c714-11e8-bb16-000c2967769a/hello-world
/ # exit

There should be a VolumeAttachment while the app has the volume mounted:

$ kubectl get VolumeAttachment

Name:         csi-a4e97f3af2161c6d081b8e96c58ed00c9bf1e1745e89b2545e24505437f015df
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1beta1
Kind:         VolumeAttachment
Metadata:
  Creation Timestamp:  2018-10-03T13:59:19Z
  Resource Version:    1730
  Self Link:           /apis/storage.k8s.io/v1beta1/volumeattachments/csi-a4e97f3af2161c6d081b8e96c58ed00c9bf1e1745e89b2545e24505437f015df
  UID:                 862d7241-c714-11e8-8911-000c2967769a
Spec:
  Attacher:   csi-hostpath
  Node Name:  127.0.0.1
  Source:
    Persistent Volume Name:  pvc-0571cc14-c714-11e8-8911-000c2967769a
Status:
  Attached:  true
Events:      <none>

Snapshot support

Enable dynamic provisioning of volume snapshot by creating a volume snapshot class as follows:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/example/snapshot/csi-snapshotclass.yaml

volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass created

$ kubectl get volumesnapshotclass

NAME                     AGE
csi-hostpath-snapclass   11s

$ kubectl describe volumesnapshotclass

Name:         csi-hostpath-snapclass
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshotClass
Metadata:
  Creation Timestamp:  2018-10-03T14:15:30Z
  Generation:          1
  Resource Version:    2418
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotclasses/csi-hostpath-snapclass
  UID:                 c8f5bc47-c716-11e8-8911-000c2967769a
Snapshotter:           csi-hostpath
Events:                <none>
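
Based on the fields shown above, the snapshot class corresponds roughly to this manifest (the v1alpha1 snapshot API uses a top-level snapshotter field):

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
snapshotter: csi-hostpath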

Use the volume snapshot class to dynamically create a volume snapshot:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/example/snapshot/csi-snapshot.yaml

volumesnapshot.snapshot.storage.k8s.io/new-snapshot-demo created

$ kubectl get volumesnapshot

NAME                AGE
new-snapshot-demo   12s

$ kubectl get volumesnapshotcontent

NAME                                               AGE
snapcontent-f55db632-c716-11e8-8911-000c2967769a   14s

$ kubectl describe volumesnapshot

Name:         new-snapshot-demo
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2018-10-03T14:16:45Z
  Generation:          1
  Resource Version:    2476
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/namespaces/default/volumesnapshots/new-snapshot-demo
  UID:                 f55db632-c716-11e8-8911-000c2967769a
Spec:
  Snapshot Class Name:    csi-hostpath-snapclass
  Snapshot Content Name:  snapcontent-f55db632-c716-11e8-8911-000c2967769a
  Source:
    Kind:  PersistentVolumeClaim
    Name:  csi-pvc
Status:
  Creation Time:  2018-10-03T14:16:45Z
  Ready:          true
  Restore Size:   1Gi
Events:           <none>
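
Based on the spec shown above, the snapshot object corresponds roughly to this manifest:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  snapshotClassName: csi-hostpath-snapclass
  source:
    kind: PersistentVolumeClaim
    name: csi-pvc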

$ kubectl describe volumesnapshotcontent

Name:         snapcontent-f55db632-c716-11e8-8911-000c2967769a
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshotContent
Metadata:
  Creation Timestamp:  2018-10-03T14:16:45Z
  Generation:          1
  Resource Version:    2474
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotcontents/snapcontent-f55db632-c716-11e8-8911-000c2967769a
  UID:                 f561411f-c716-11e8-8911-000c2967769a
Spec:
  Csi Volume Snapshot Source:
    Creation Time:    1538576205471577525
    Driver:           csi-hostpath
    Restore Size:     1073741824
    Snapshot Handle:  f55ff979-c716-11e8-bb16-000c2967769a
  Persistent Volume Ref:
    API Version:        v1
    Kind:               PersistentVolume
    Name:               pvc-0571cc14-c714-11e8-8911-000c2967769a
    Resource Version:   1573
    UID:                0575b966-c714-11e8-8911-000c2967769a
  Snapshot Class Name:  csi-hostpath-snapclass
  Volume Snapshot Ref:
    API Version:       snapshot.storage.k8s.io/v1alpha1
    Kind:              VolumeSnapshot
    Name:              new-snapshot-demo
    Namespace:         default
    Resource Version:  2472
    UID:               f55db632-c716-11e8-8911-000c2967769a
Events:                <none>

Restore volume from snapshot support

Follow this example to create a volume from a volume snapshot:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/example/snapshot/csi-restore.yaml

persistentvolumeclaim/hpvc-restore created
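
The restore claim references the snapshot through a dataSource entry (this is what the VolumeSnapshotDataSource feature gate enables); a rough sketch based on the outputs below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi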

$ kubectl get pvc

NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc        Bound    pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            csi-hostpath-sc   24m
hpvc-restore   Bound    pvc-77324684-c717-11e8-8911-000c2967769a   1Gi        RWO            csi-hostpath-sc   6s

$ kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS      REASON   AGE
pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            Delete           Bound    default/csi-pvc        csi-hostpath-sc            25m
pvc-77324684-c717-11e8-8911-000c2967769a   1Gi        RWO            Delete           Bound    default/hpvc-restore   csi-hostpath-sc            33s

If you encounter any problems, please check the Troubleshooting page.

Testing

This section describes how CSI developers can test their CSI drivers.

Unit Testing

The CSI sanity package from csi-test can be used for unit testing your CSI driver.

It contains a set of basic tests that all CSI drivers should pass (for example, NodePublishVolume should fail when no volume id is provided, etc.).

This package can be used in two modes:

  • Via a Golang test framework (sanity package is imported as a dependency)
  • Via a command line against your driver binary.

Read the documentation at https://github.com/kubernetes-csi/csi-test/blob/master/pkg/sanity/README.md for more details.

Functional Testing

Some functional testing of your CSI driver can be done via the CLI mode of the CSI sanity package.

Drivers should also be functionally "end-to-end" tested while deployed in a Kubernetes cluster. Currently how to do this and what tests to run is left up to driver authors. In the future, a project (currently in development) aims to enable use of a pre-built kubernetes/e2e/e2e.test binary containing a standard set of Kubernetes CSI end-to-end tests to be imported and run by third party CSI drivers. This documentation will be updated with more information once that is ready to use.

The CSI community is also looking in to establishing an official "CSI Conformance Suite" to recognize "officially certified CSI drivers". This documentation will be updated with more information once that process has been defined.

Drivers

The following is a list of CSI drivers that can be used with Kubernetes:

NOTE: If you would like your driver to be added to this table, please open a pull request in this repo updating this file.

Production Drivers

Name Status More Information
Alicloud Elastic Block Storage v1.0.0 A Container Storage Interface (CSI) Storage Plug-in for Alicloud Elastic Block Storage
Alicloud Elastic File System v1.0.0 A Container Storage Interface (CSI) Storage Plug-in for Alicloud Elastic File System
Alicloud OSS v1.0.0 A Container Storage Interface (CSI) Storage Plug-in for Alicloud OSS
AWS Elastic Block Storage v0.2.0 A Container Storage Interface (CSI) Driver for AWS Elastic Block Storage (EBS)
AWS Elastic File System v0.1.0 A Container Storage Interface (CSI) Driver for AWS Elastic File System (EFS)
AWS FSx for Lustre v0.1.0 A Container Storage Interface (CSI) Driver for AWS FSx for Lustre
Azure disk v0.1.0 (alpha) A Container Storage Interface (CSI) Storage Plug-in for Azure disk
Azure file v0.1.0 (alpha) A Container Storage Interface (CSI) Storage Plug-in for Azure file
CephFS v0.2.0 A Container Storage Interface (CSI) Storage Plug-in for CephFS
Cinder v0.2.0 A Container Storage Interface (CSI) Storage Plug-in for Cinder
Datera v1.0.0 A Container Storage Interface (CSI) Storage Plugin for Datera Data Services Platform (DSP)
DigitalOcean Block Storage v0.4.0 A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage
DriveScale v1.0.0 A Container Storage Interface (CSI) Storage Plug-in for DriveScale software composable infrastructure solution
Ember CSI v0.2.0 (alpha) Multi-vendor CSI plugin supporting over 80 storage drivers to provide block and mount storage to Container Orchestration systems.
GCE Persistent Disk Beta A Container Storage Interface (CSI) Storage Plugin for Google Compute Engine Persistent Disk (GCE PD)
Google Cloud Filestore Alpha A Container Storage Interface (CSI) Storage Plugin for Google Cloud Filestore
GlusterFS v1.0.0 A Container Storage Interface (CSI) Plugin for GlusterFS
Linode Block Storage v0.0.3 A Container Storage Interface (CSI) Driver for Linode Block Storage
LINSTOR v0.3.0 A Container Storage Interface (CSI) Storage Plugin for LINSTOR
MapR v1.0.0 A Container Storage Interface (CSI) Storage Plugin for MapR Data Platform
MooseFS v0.0.1 (alpha) A Container Storage Interface (CSI) Storage Plugin for MooseFS clusters.
NetApp v0.2.0 (alpha) A Container Storage Interface (CSI) Storage Plug-in for NetApp's Trident container storage orchestrator
NexentaStor Beta A Container Storage Interface (CSI) Driver for NexentaStor
Nutanix beta A Container Storage Interface (CSI) Storage Driver for Nutanix
OpenSDS Beta For more information, please visit releases and https://github.com/opensds/nbp/tree/master/csi
Portworx 0.3.0 CSI implementation is available here and can also be used as an example.
Quobyte v0.2.0 A Container Storage Interface (CSI) Plugin for Quobyte
RBD v0.2.0 A Container Storage Interface (CSI) Storage RBD Plug-in for Ceph
ScaleIO v0.1.0 A Container Storage Interface (CSI) Storage Plugin for DellEMC ScaleIO
StorageOS v1.0.0 A Container Storage Interface (CSI) Plugin for StorageOS
XSKY Beta A Container Storage Interface (CSI) Driver for XSKY Distributed Block Storage (X-EBS)
Vault Alpha A Container Storage Interface (CSI) Plugin for HashiCorp Vault
vSphere v0.1.0 A Container Storage Interface (CSI) Storage Plug-in for VMware vSphere
YanRongYun v1.0.0 A Container Storage Interface (CSI) Driver for YanRong YRCloudFile Storage

Sample Drivers

Name Status More Information
Flexvolume Sample
HostPath v0.2.0 Only use for single-node tests. See the Example page for Kubernetes-specific instructions.
In-memory Sample Mock Driver v0.3.0 The sample mock driver used for csi-sanity
NFS Sample
VFS Driver Released A CSI plugin that provides a virtual file system.

Troubleshooting

Known Issues

  • minikube #3378: Volume mount causes minikube VM to become corrupted

Common Errors

Node plugin pod does not start with RunContainerError status

kubectl describe pod your-nodeplugin-pod shows:

failed to start container "your-driver": Error response from daemon:
linux mounts: Path /var/lib/kubelet/pods is mounted on / but it is not a shared mount

Your Docker host is not configured to allow shared mounts. Take a look at this page for instructions to enable them.

External attacher can't find VolumeAttachments

On a Kubernetes 1.9 cluster, being unable to list VolumeAttachment objects, together with the following error, is caused by the missing storage.k8s.io/v1alpha1=true runtime configuration:

$ kubectl logs csi-pod external-attacher
...
I0306 16:34:50.976069       1 reflector.go:240] Listing and watching *v1alpha1.VolumeAttachment from github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86

E0306 16:34:50.992034       1 reflector.go:205] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1alpha1.VolumeAttachment: the server could not find the requested resource
...

Please see the Kubernetes 1.9 page.

Problems with the external components

The external component images are under active development, and it can happen that they become incompatible with each other. If the issues above have been ruled out, contact the sig-storage team and/or run the e2e test:

go run hack/e2e.go -- --provider=local --test --test_args="--ginkgo.focus=Feature:CSI"