Introduction

Kubernetes CSI Documentation

Welcome to the Container Storage Interface (CSI) for Kubernetes documentation repository. Here you will find information on how to use, develop, and deploy CSI plugins (also called drivers) with Kubernetes.

Project status

Kubernetes    CSI spec    Status
v1.9          v0.1        Alpha
v1.10         v0.2        Beta
v1.11         v0.3        Beta

Sidecar container status

Container Name     CSI spec    Latest Release Tag
csi-provisioner    v0.3        v0.3.1
csi-attacher       v0.3        v0.3.0
driver-registrar   v0.3        v0.3.0

Installation

Please see the Setup page for instructions on how to set up CSI support in Kubernetes.

Using CSI Drivers

Before you can start using CSI, you must understand how to properly set up, configure, and deploy CSI drivers on Kubernetes. This section provides information on:

  • Setup - Information on how to set up the CSI feature
  • Deployment - Instructions on deploying a driver
  • Drivers - A growing list of available CSI drivers you can use
  • Usage - Find out the usage modes of CSI drivers
  • Example - Using the HostPath driver as an example

Setup

This document has been updated for the latest version of Kubernetes, v1.12, and outlines the features that are available for CSI. For step-by-step instructions on how to run an example CSI driver, see the Example section.

Enabling features

Some of the features discussed here may be at different stages (alpha, beta, or GA). Ensure that the feature you want to try is enabled for the Kubernetes release you are using. To avoid version mismatch, you can enable all of the features discussed here with:

--feature-gates=VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true

Enable privileged Pods

To use CSI drivers, your Kubernetes cluster must allow privileged pods (i.e. --allow-privileged flag must be set to true for both the API server and the kubelet). This is the default in some environments (e.g. GCE, GKE, kubeadm).

Ensure both your API server and your kubelet are started with the privileged flag:

$ ./kube-apiserver ...  --allow-privileged=true ...
$ ./kubelet ...  --allow-privileged=true ...

Enabling mount propagation

Another feature that CSI depends on is mount propagation. It allows the sharing of volumes mounted by one container with other containers in the same pod, or even with other pods on the same node. For mount propagation to work, the Docker daemon for the cluster must allow shared mounts. See the mount propagation docs for how to check whether shared mounts are enabled and how to configure Docker to allow them.
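
On systemd-based hosts, a quick sanity check is to inspect the Docker unit's MountFlags setting (an empty value or shared is what you want) and the propagation mode of the root mount. Both commands below are standard tooling, shown here only as a sketch:

$ systemctl show docker --property MountFlags
MountFlags=

$ findmnt -o TARGET,PROPAGATION /
TARGET PROPAGATION
/      shared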

Enable raw block volume support (alpha)

Kubernetes now has raw block volume support as an alpha implementation. If you want to use the CSI raw block volume support, you must enable the feature for your Kubernetes binaries (including the API server, kubelet, controller manager, etc.) with the feature-gates flag as follows:

$ kube<binary> --feature-gates=BlockVolume=true,CSIBlockVolume=true ...

CSIDriver custom resource (alpha)

Starting with version 1.12, the CSIDriver custom resource definition (or CRD) has been introduced as a way to represent the CSI drivers running in a cluster. An admin can update the attributes of this object to modify the configuration of its associated driver at runtime.
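
For example, once the CRD and a driver object exist (see the sample below), an admin can adjust a driver's settings in place with:

$> kubectl edit csidrivers.csi.storage.k8s.io csi-hostpath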

You can see the full definition of this CRD here.

The alpha release of CSIDriver exposes three main configuration settings:

apiVersion: v1
items:
- apiVersion: csi.storage.k8s.io/v1alpha1
  kind: CSIDriver
  metadata:
    name: csi-hostpath
  spec:
    attachRequired: true
    podInfoOnMountVersion: "v1"

Where:

  • metadata.name - the identifying name of the CSI driver. The name must be unique in the cluster, as it is used to identify the driver.
  • attachRequired - indicates that the CSI volume driver requires a volume attach operation. This causes Kubernetes to make a CSI ControllerPublishVolume() call and wait for its completion before proceeding to mount.
  • podInfoOnMountVersion - indicates that the associated CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount. Leave the value empty if you do not want pod info to be transmitted, or set it to v1, which causes the Kubelet to send the following pod information to the driver as VolumeAttributes during NodePublishVolume() calls:
csi.storage.k8s.io/pod.name: pod.Name
csi.storage.k8s.io/pod.namespace: pod.Namespace
csi.storage.k8s.io/pod.uid: string(pod.UID)

Enabling CSIDriver

If you want to use the CSIDriver CRD and get a preview of how configuration will work at runtime, do the following:

  1. Ensure the feature gate is enabled with --feature-gates=CSIDriverRegistry=true
  2. Install the CSIDriver CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/testdata/csidriver.yaml
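
You can confirm that the CRD was installed with a standard CRD query:

$> kubectl get crd csidrivers.csi.storage.k8s.io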

Listing registered CSI drivers

Using the CSIDriver CRD, it is now possible to query Kubernetes for a list of the registered drivers running in the cluster, as shown below:

$> kubectl get csidrivers.csi.storage.k8s.io
NAME           AGE
csi-hostpath   2m

Or get a more detailed view of your registered driver with:

$> kubectl describe csidrivers.csi.storage.k8s.io
Name:         csi-hostpath
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  csi.storage.k8s.io/v1alpha1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2018-10-04T21:15:30Z
  Generation:          1
  Resource Version:    390
  Self Link:           /apis/csi.storage.k8s.io/v1alpha1/csidrivers/csi-hostpath
  UID:                 9f854aa6-c81a-11e8-bdce-000c29e88ff1
Spec:
  Attach Required:            true
  Pod Info On Mount Version:
Events:                       <none>

CSINodeInfo custom resource (alpha)

The CSINodeInfo object is a resource designed to carry binding information between a CSI driver and the cluster node where its volume storage will land. In the first release, CSINodeInfo is used to establish the link between a node, its drivers, and the topology keys used for scheduling volume storage.

You can see the full definition of this CRD here.

The following snippet shows a sample CSINodeInfo, which is usually created by Kubernetes:

apiVersion: v1
items:
- apiVersion: csi.storage.k8s.io/v1alpha1
  kind: CSINodeInfo
  metadata:
    name: 127.0.0.1
  csiDrivers:
  - driver: csi-hostpath
    nodeID: 127.0.0.1
    topologyKeys: []
...    

Where:

  • csiDrivers - the list of CSI drivers running on the node and their properties.
  • driver - the CSI driver that this entry refers to.
  • nodeID - the identifier for the node as determined by the driver.
  • topologyKeys - a list of topology keys assigned to the node as supported by the driver.

Enabling CSINodeInfo

If you want to use the CSINodeInfo CRD and get a preview of how configuration will work at runtime, do the following:

  1. Ensure the feature gate is enabled with --feature-gates=CSINodeInfo=true
  2. Install the CSINodeInfo CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/testdata/csinodeinfo.yaml
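
Once the CRD is installed and a driver has registered, the corresponding per-node objects can be listed with:

$> kubectl get csinodeinfos.csi.storage.k8s.io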

CSI driver discovery (beta)

The CSI driver discovery mechanism uses the Kubelet Plugin Watcher feature, which allows the Kubelet to discover deployed CSI drivers automatically. The registrar sidecar container exposes an internal registration server via a Unix domain socket. The Kubelet monitors its registration directory to detect new registration requests. Once a request is detected, the Kubelet contacts the registrar sidecar to query driver information. The retrieved CSI driver information (including the driver's own socket path) is then used for further interaction with the driver.

This replaces the previous driver registration mechanism, in which the driver-registrar sidecar, rather than the Kubelet, handled registration.

Using this discovery feature instead of the prior registration mechanism does not change how drivers behave; however, this is the way CSI will work internally in coming releases.
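
As a rough illustration of what the Kubelet sees: after a driver such as the HostPath example is deployed, its socket appears under the Kubelet's plugins directory (paths follow the example configuration below):

$ ls /var/lib/kubelet/plugins/csi-hostpath/
csi.sock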

Registrar sidecar configuration

The registrar sidecar container provides configuration functionality for its associated driver. For instance, using the registrar container, an admin can specify how the driver should behave during volume attachment operations. Some CLI arguments provided to the registrar container are used to create the CSIDriver and CSINodeInfo custom resources discussed earlier.

To configure your driver using the registrar sidecar, set up the container as shown in the snippet below:

- name: driver-registrar
  args:
  - --v=5
  - --csi-address=/csi/csi.sock
  - --mode=node-register
  - --driver-requires-attachment=true
  - --pod-info-mount-version="v1"
  - --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock
  env:
  - name: KUBE_NODE_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: spec.nodeName
  image: quay.io/k8scsi/driver-registrar:v0.2.0
  imagePullPolicy: Always
  volumeMounts:
  - mountPath: /csi
    name: socket-dir
  - mountPath: /registration
    name: registration-dir
...
volumes:
- name: socket-dir
  hostPath:
    path: /var/lib/kubelet/plugins/csi-hostpath
    type: DirectoryOrCreate
- name: registration-dir
  hostPath:
    path: /var/lib/kubelet/plugins
    type: Directory

Where:

  • --csi-address - specifies the Unix domain socket path, on the host, for the CSI driver. It allows the registrar sidecar to communicate with the driver for discovery information. The mount path /csi is mapped to the hostPath entry socket-dir, which in turn maps to the directory /var/lib/kubelet/plugins/csi-hostpath.

  • --mode - specifies the mode in which the registrar binary runs; node-register (used above) registers the driver with the Kubelet via the plugin watcher mechanism described earlier.

  • --driver-requires-attachment - indicates that this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that Kubernetes should call attach and wait for the attach operation to complete before proceeding to mount. If no value is specified, the default is false, meaning attach will not be called.

  • --pod-info-mount-version="v1" - indicates that the associated CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount. A value of "v1" causes the Kubelet to send the following pod information to the driver as VolumeAttributes during NodePublishVolume() calls:

  csi.storage.k8s.io/pod.name: pod.Name
  csi.storage.k8s.io/pod.namespace: pod.Namespace
  csi.storage.k8s.io/pod.uid: string(pod.UID)
  • --kubelet-registration-path - specifies the fully-qualified path of the Unix domain socket for the CSI driver on the host. This path is constructed using the path from HostPath socket-dir and the additional suffix csi.sock. The registrar sidecar will provide this path to core CSI components for subsequent volume operations.

  • VolumeMount /csi - is mapped to HostPath /var/lib/kubelet/plugins/csi-hostpath. It is the root location where the CSI driver's Unix domain socket file is created on the host.

  • VolumeMount /registration - is mapped to HostPath /var/lib/kubelet/plugins. It is the root location that the Kubelet watcher scans for new plugin registrations.

The Kubelet root directory

In the configuration above, notice that all paths start with /var/lib/kubelet/plugins. That is because the discovery mechanism relies on the Kubelet's root directory, which is /var/lib/kubelet by default. Ensure that this path value matches the value specified in the Kubelet's --root-dir argument.
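
If your cluster runs the Kubelet with a non-default root directory, the registration and socket paths must be adjusted to match. A sketch of the relationship:

$ ./kubelet ... --root-dir=/var/lib/kubelet ...
# registration dir:  <root-dir>/plugins
# driver socket dir: <root-dir>/plugins/csi-hostpath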

CSI Volume Snapshot support

To enable support for Kubernetes volume snapshotting, you must set the following feature gate on Kubernetes v1.12 (the feature is alpha and disabled by default):

--feature-gates=VolumeSnapshotDataSource=true

Archives

Please visit the Archives for setup instructions on previous versions of Kubernetes.

Deployment

To benefit from the new CSI support, you will need to deploy a CSI driver. Please visit the Drivers page to determine how to deploy your specific driver. A functional example based on the HostPath driver is presented in the Example section.

Drivers

The following is a set of CSI drivers that can be used with Kubernetes:

NOTE: If you would like your driver to be added to this table, please create an issue in this repo with the information you would like to add here.

Sample Drivers

Name                          Status    More Information
Flexvolume                    Sample
HostPath                      v0.2.0    Only use for single-node tests. See the Example page for Kubernetes-specific instructions.
In-memory Sample Mock Driver  v0.3.0    The sample mock driver used for csi-sanity
NFS                           Sample
VFS Driver                    Released  A CSI plugin that provides a virtual file system.

Production Drivers

Name                        Status          More Information
Cinder                      v0.2.0          A Container Storage Interface (CSI) Storage Plug-in for Cinder
DigitalOcean Block Storage  v0.0.1 (alpha)  A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage
AWS Elastic Block Storage   v0.0.1 (alpha)  A Container Storage Interface (CSI) Driver for AWS Elastic Block Storage (EBS)
GCE Persistent Disk         Alpha           A Container Storage Interface (CSI) Storage Plugin for Google Compute Engine Persistent Disk
OpenSDS                     Beta            For more information, please visit releases and https://github.com/opensds/nbp/tree/master/csi
Portworx                    0.2.0           CSI implementation is available here and can also be used as an example
RBD                         v0.2.0          A Container Storage Interface (CSI) Storage RBD Plug-in for Ceph
CephFS                      v0.2.0          A Container Storage Interface (CSI) Storage Plug-in for CephFS
ScaleIO                     v0.1.0          A Container Storage Interface (CSI) Storage Plugin for DellEMC ScaleIO
vSphere                     v0.1.0          A Container Storage Interface (CSI) Storage Plug-in for VMware vSphere
NetApp                      v0.2.0 (alpha)  A Container Storage Interface (CSI) Storage Plug-in for NetApp's Trident container storage orchestrator
Ember CSI                   v0.2.0 (alpha)  Multi-vendor CSI plugin supporting over 80 storage drivers to provide block and mount storage to Container Orchestration systems
Nutanix                     beta            A Container Storage Interface (CSI) Storage Driver for Nutanix
Quobyte                     v0.2.0          A Container Storage Interface (CSI) Plugin for Quobyte

Testing

There are multiple ways to test your driver. Please see Testing Drivers for more information.

Usage

There are two main models for using storage in Kubernetes with CSI drivers: pre-provisioned volumes and dynamically provisioned volumes. Please check the documentation of your specific driver for more information.

Pre-provisioned volumes

Pre-provisioned volumes work just as they did before: the administrator creates a PersistentVolume specification that describes the volume to be used. The PersistentVolume specification needs to be set up according to your driver; the difference is a new section called csi, which must be configured accordingly. Please see the Kubernetes documentation on CSI volumes.

Here is an example of a PersistentVolume specification of a pre-provisioned volume managed by a CSI driver:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manually-created-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: com.example.team/csi-driver
    volumeHandle: existingVolumeName
    readOnly: false
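
To consume such a pre-provisioned volume, a claim can reference it directly by name; a minimal sketch (the claim name is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manually-created-pvc
spec:
  storageClassName: ""              # an empty class disables dynamic provisioning for this claim
  volumeName: manually-created-pv   # bind directly to the pre-provisioned PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi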

Dynamic Provisioning

To set up the system for dynamic provisioning, the administrator needs to create a StorageClass pointing to the CSI driver's external-provisioner and specifying any parameters required by the driver. Here is an example of a StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: com.example.team/csi-driver
parameters:
  type: pd-ssd

Where:

  • provisioner: Must be set to the name of the CSI driver
  • parameters: Must contain any parameters specific to the CSI driver.

The user can then create a PersistentVolumeClaim utilizing this StorageClass as follows:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: request-for-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-storage

CSI Provisioner Parameters

The CSI dynamic provisioner makes CreateVolumeRequest and DeleteVolumeRequest calls to CSI drivers. The controllerCreateSecrets and controllerDeleteSecrets fields in those requests can be populated with data from a Kubernetes Secret object by setting csiProvisionerSecretName and csiProvisionerSecretNamespace parameters in the StorageClass. For example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: com.example.team/csi-driver
parameters:
  type: pd-ssd
  csiProvisionerSecretName: fast-storage-provision-key
  csiProvisionerSecretNamespace: pd-ssd-credentials

The csiProvisionerSecretName and csiProvisionerSecretNamespace parameters may specify literal values, or a template containing the following variables (an illustrative Secret sketch follows this list):

  • ${pv.name} - replaced with the name of the PersistentVolume object being provisioned
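
For instance, the StorageClass above expects a Secret named fast-storage-provision-key to exist in the pd-ssd-credentials namespace. A minimal sketch follows; the data keys shown are purely illustrative, since the keys a driver expects are driver-specific:

apiVersion: v1
kind: Secret
metadata:
  name: fast-storage-provision-key
  namespace: pd-ssd-credentials
type: Opaque
stringData:
  username: admin   # illustrative key; consult your driver's documentation
  password: s3cr3t  # illustrative key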

Once the CSI volume is created, a corresponding Kubernetes PersistentVolume object is created. The controllerPublishSecretRef, nodeStageSecretRef, and nodePublishSecretRef fields in the PersistentVolume object can be populated via the following storage class parameters:

  • controllerPublishSecretRef in the PersistentVolume is populated by setting these StorageClass parameters:
    • csiControllerPublishSecretName
    • csiControllerPublishSecretNamespace
  • nodeStageSecretRef in the PersistentVolume is populated by setting these StorageClass parameters:
    • csiNodeStageSecretName
    • csiNodeStageSecretNamespace
  • nodePublishSecretRef in the PersistentVolume is populated by setting these StorageClass parameters:
    • csiNodePublishSecretName
    • csiNodePublishSecretNamespace

The csiControllerPublishSecretName, csiNodeStageSecretName, and csiNodePublishSecretName parameters may specify a literal secret name, or a template containing the following variables:

  • ${pv.name} - replaced with the name of the PersistentVolume
  • ${pvc.name} - replaced with the name of the PersistentVolumeClaim
  • ${pvc.namespace} - replaced with the namespace of the PersistentVolumeClaim
  • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']}) - replaced with the value of the specified annotation in the PersistentVolumeClaim

The csiControllerPublishSecretNamespace, csiNodeStageSecretNamespace, and csiNodePublishSecretNamespace parameters may specify a literal namespace name, or a template containing the following variables:

  • ${pv.name} - replaced with the name of the PersistentVolume
  • ${pvc.namespace} - replaced with the namespace of the PersistentVolumeClaim

As an example, consider this StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: com.example.team/csi-driver
parameters:
  type: pd-ssd

  csiProvisionerSecretName: fast-storage-provision-key
  csiProvisionerSecretNamespace: pd-ssd-credentials

  csiControllerPublishSecretName: ${pv.name}-publish
  csiControllerPublishSecretNamespace: pd-ssd-credentials

  csiNodeStageSecretName: ${pv.name}-stage
  csiNodeStageSecretNamespace: pd-ssd-credentials

  csiNodePublishSecretName: ${pvc.annotations['com.example.team/key']}
  csiNodePublishSecretNamespace: ${pvc.namespace}

This StorageClass instructs the CSI provisioner to do the following:

  • send the data in the fast-storage-provision-key secret in the pd-ssd-credentials namespace as part of the create request to the CSI driver
  • create a PersistentVolume with:
    • a per-volume controller publish secret and node stage secret, both in the pd-ssd-credentials namespace (those secrets would need to be created separately, in response to the PersistentVolume creation, before the PersistentVolume could be attached/mounted)
    • a node publish secret in the same namespace as the PersistentVolumeClaim that triggered the provisioning, with a name specified as an annotation on the PersistentVolumeClaim (see the sketch below). This could be used to give the creator of the PersistentVolumeClaim the ability to specify a secret containing a decryption key they have control over.
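
To make the node publish template concrete, the PersistentVolumeClaim that triggers provisioning would carry the referenced annotation; a minimal sketch (the annotation value, i.e. the secret name, is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: request-for-storage
  annotations:
    com.example.team/key: my-decryption-secret   # consumed by the csiNodePublishSecretName template
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-storage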

Example

The HostPath driver can be used to provision local storage in a single-node test. This section shows how to deploy and use that driver in Kubernetes.

Deployment

This is tested with Kubernetes v1.12. Set the following feature gate flags to true:

--feature-gates=CSIPersistentVolume=true,MountPropagation=true,VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true

CSIPersistentVolume is enabled by default in v1.10. MountPropagation is enabled by default in v1.10. VolumeSnapshotDataSource is a new alpha feature in v1.12. KubeletPluginsWatcher is enabled by default in v1.12. CSINodeInfo and CSIDriverRegistry are new alpha features in v1.12.

CRDs need to be created manually for CSIDriverRegistry and CSINodeInfo:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/testdata/csidriver.yaml --validate=false
customresourcedefinition.apiextensions.k8s.io/csidrivers.csi.storage.k8s.io created

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/testdata/csinodeinfo.yaml --validate=false
customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io created

Create RBAC rules for CSI provisioner

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/rbac/csi-provisioner-rbac.yaml
serviceaccount/csi-provisioner created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["list", "watch", "create", "update", "get"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get", "list"]
    
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: csi-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Create RBAC rules for CSI attacher

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/rbac/csi-attacher-rbac.yaml
serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-attacher

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-attacher-runner
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-role
subjects:
  - kind: ServiceAccount
    name: csi-attacher
    namespace: default
roleRef:
  kind: ClusterRole
  name: external-attacher-runner
  apiGroup: rbac.authorization.k8s.io

Create RBAC rules for node plugin

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/rbac/csi-nodeplugin-rbac.yaml
serviceaccount/csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/csi-nodeplugin created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-nodeplugin

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csidrivers"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csinodeinfos"]
    verbs: ["get", "list", "watch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: csi-nodeplugin
    namespace: default
roleRef:
  kind: ClusterRole
  name: csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io

Create RBAC rules for CSI snapshotter

The CSI snapshotter is an optional sidecar container. You only need to create these RBAC rules if you want to test the snapshot feature.

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/snapshot/csi-snapshotter-rbac.yaml
serviceaccount/csi-snapshotter created
clusterrole.rbac.authorization.k8s.io/external-snapshotter-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-role created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-snapshotter
 
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-snapshotter-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create", "list", "watch", "delete"]
 
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-snapshotter-role
subjects:
  - kind: ServiceAccount
    name: csi-snapshotter
    namespace: default
roleRef:
  kind: ClusterRole
  name: external-snapshotter-runner
  apiGroup: rbac.authorization.k8s.io

Deploy CSI provisioner in StatefulSet pod

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/hostpath/csi-hostpath-provisioner.yaml
service/csi-hostpath-provisioner created
statefulset.apps/csi-hostpath-provisioner created

$ kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-provisioner-0   1/1     Running   0          6s
kind: Service
apiVersion: v1
metadata:
  name: csi-hostpath-provisioner 
  labels:
    app: csi-hostpath-provisioner 
spec:
  selector:
    app: csi-hostpath-provisioner 
  ports:
    - name: dummy
      port: 12345

---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-hostpath-provisioner
spec:
  serviceName: "csi-hostpath-provisioner"
  replicas: 1
  selector:
    matchLabels:
      app: csi-hostpath-provisioner
  template:
    metadata:
      labels:
        app: csi-hostpath-provisioner
    spec:
      serviceAccount: csi-provisioner
      containers:
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v0.4.0
          args:
            - "--provisioner=csi-hostpath"
            - "--csi-address=$(ADDRESS)"
            - "--connection-timeout=15s"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir

Deploy CSI attacher in StatefulSet pod

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/hostpath/csi-hostpath-attacher.yaml
service/csi-hostpath-attacher created
statefulset.apps/csi-hostpath-attacher created

$ kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          4s
csi-hostpath-provisioner-0   1/1     Running   0          2m1s
kind: Service
apiVersion: v1
metadata:
  name: csi-hostpath-attacher
  labels:
    app: csi-hostpath-attacher
spec:
  selector:
    app: csi-hostpath-attacher
  ports:
    - name: dummy
      port: 12345

---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-hostpath-attacher
spec:
  serviceName: "csi-hostpath-attacher"
  replicas: 1
  selector:
    matchLabels:
      app: csi-hostpath-attacher
  template:
    metadata:
      labels:
        app: csi-hostpath-attacher
    spec:
      serviceAccount: csi-attacher
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v0.4.0
          args:
            - --v=5
            - --csi-address=$(ADDRESS)
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: /csi
            name: socket-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir

Deploy driver-registrar and hostpath CSI plugin in DaemonSet pod

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/hostpath/csi-hostpathplugin.yaml
daemonset.apps/csi-hostpathplugin created

$ kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          53s
csi-hostpath-provisioner-0   1/1     Running   0          2m50s
csi-hostpathplugin-9rp7c     2/2     Running   0          5s
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-hostpathplugin
spec:
  selector:
    matchLabels:
      app: csi-hostpathplugin
  template:
    metadata:
      labels:
        app: csi-hostpathplugin
    spec:
      serviceAccount: csi-nodeplugin
      hostNetwork: true
      containers:
        - name: driver-registrar
          image: quay.io/k8scsi/driver-registrar:v0.4.0
          args:
            - --v=5
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: /csi
            name: socket-dir
          - mountPath: /registration
            name: registration-dir
        - name: hostpath
          image: quay.io/k8scsi/hostpathplugin:v0.4.0
          args:
            - "--v=5"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(KUBE_NODE_NAME)"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
              name: mountpoint-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir
        - hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
          name: mountpoint-dir
        - hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
          name: registration-dir

Deploy CSI snapshotter in StatefulSet pod

The CSI snapshotter is an optional sidecar container. You only need to deploy it if you want to test the snapshot feature.

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/snapshot/csi-hostpath-snapshotter.yaml
service/csi-hostpath-snapshotter created
statefulset.apps/csi-hostpath-snapshotter created

$ kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          96s
csi-hostpath-provisioner-0   1/1     Running   0          3m33s
csi-hostpath-snapshotter-0   1/1     Running   0          5s
csi-hostpathplugin-9rp7c     2/2     Running   0          48s
kind: Service
apiVersion: v1
metadata:
  name: csi-hostpath-snapshotter 
  labels:
    app: csi-hostpath-snapshotter 
spec:
  selector:
    app: csi-hostpath-snapshotter 
  ports:
    - name: dummy
      port: 12345

---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-hostpath-snapshotter
spec:
  serviceName: "csi-hostpath-snapshotter"
  replicas: 1
  selector:
    matchLabels:
      app: csi-hostpath-snapshotter
  template:
    metadata:
      labels:
        app: csi-hostpath-snapshotter
    spec:
      serviceAccount: csi-snapshotter
      containers:
        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v0.4.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--connection-timeout=15s"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir

Usage

Dynamic provisioning is enabled by creating a csi-hostpath-sc storage class.

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/usage/csi-storageclass.yaml
storageclass.storage.k8s.io/csi-hostpath-sc created
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: csi-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate

We can use this storage class to create and claim a new volume:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/usage/csi-pvc.yaml
persistentvolumeclaim/csi-pvc created
$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc   Bound    pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            csi-hostpath-sc   3s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            Delete           Bound    default/csi-pvc   csi-hostpath-sc            3s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc # defined in csi-setup.yaml

The HostPath driver is configured to create new volumes under /tmp inside the hostpath container in the CSI hostpath plugin DaemonSet pod; such volumes therefore persist only as long as the DaemonSet pod itself. We can use one of these volumes in another pod like this:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/usage/csi-app.yaml
pod/my-csi-app created
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          17m
csi-hostpath-provisioner-0   1/1     Running   0          19m
csi-hostpath-snapshotter-0   1/1     Running   0          16m
csi-hostpathplugin-9rp7c     2/2     Running   0          16m
my-csi-app                   1/1     Running   0          5s

$ kubectl describe pods/my-csi-app
Name:               my-csi-app
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               127.0.0.1/127.0.0.1
Start Time:         Wed, 03 Oct 2018 06:59:19 -0700
Labels:             <none>
Annotations:        <none>
Status:             Running
IP:                 172.17.0.5
Containers:
  my-frontend:
    Container ID:  docker://fd2950af39a155bdf08d1da341cfb23aa0d1af3eaaad6950a946355789606e8c
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      1000000
    State:          Running
      Started:      Wed, 03 Oct 2018 06:59:22 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from my-csi-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xms2g (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  my-csi-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-pvc
    ReadOnly:   false
  default-token-xms2g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xms2g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               69s   default-scheduler        Successfully assigned default/my-csi-app to 127.0.0.1
  Normal  SuccessfulAttachVolume  69s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-0571cc14-c714-11e8-8911-000c2967769a"
  Normal  Pulling                 67s   kubelet, 127.0.0.1       pulling image "busybox"
  Normal  Pulled                  67s   kubelet, 127.0.0.1       Successfully pulled image "busybox"
  Normal  Created                 67s   kubelet, 127.0.0.1       Created container
  Normal  Started                 66s   kubelet, 127.0.0.1       Started container
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-csi-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc # defined in csi-pvc.yaml

Confirming the setup

Writing inside the app container should be visible in /tmp of the hostpath container:

$ kubectl exec -it my-csi-app /bin/sh
/ # touch /data/hello-world
/ # exit

$ kubectl exec -it $(kubectl get pods --selector app=csi-hostpathplugin -o jsonpath='{.items[*].metadata.name}') -c hostpath /bin/sh
/ # find / -name hello-world
/tmp/057485ab-c714-11e8-bb16-000c2967769a/hello-world
/ # exit

There should be a VolumeAttachment while the app has the volume mounted:

$ kubectl describe volumeattachment
Name:         csi-a4e97f3af2161c6d081b8e96c58ed00c9bf1e1745e89b2545e24505437f015df
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1beta1
Kind:         VolumeAttachment
Metadata:
  Creation Timestamp:  2018-10-03T13:59:19Z
  Resource Version:    1730
  Self Link:           /apis/storage.k8s.io/v1beta1/volumeattachments/csi-a4e97f3af2161c6d081b8e96c58ed00c9bf1e1745e89b2545e24505437f015df
  UID:                 862d7241-c714-11e8-8911-000c2967769a
Spec:
  Attacher:   csi-hostpath
  Node Name:  127.0.0.1
  Source:
    Persistent Volume Name:  pvc-0571cc14-c714-11e8-8911-000c2967769a
Status:
  Attached:  true
Events:      <none>

Snapshot support

Enable dynamic provisioning of volume snapshots by creating a volume snapshot class, as follows:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/snapshot/csi-snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass created
$ kubectl get volumesnapshotclass
NAME                     AGE
csi-hostpath-snapclass   11s
$ kubectl describe volumesnapshotclass
Name:         csi-hostpath-snapclass
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshotClass
Metadata:
  Creation Timestamp:  2018-10-03T14:15:30Z
  Generation:          1
  Resource Version:    2418
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotclasses/csi-hostpath-snapclass
  UID:                 c8f5bc47-c716-11e8-8911-000c2967769a
Snapshotter:           csi-hostpath
Events:                <none>
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
snapshotter: csi-hostpath

Use the volume snapshot class to dynamically create a volume snapshot:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/snapshot/csi-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/new-snapshot-demo created

$ kubectl get volumesnapshot
NAME                AGE
new-snapshot-demo   12s

$ kubectl get volumesnapshotcontent
NAME                                               AGE
snapcontent-f55db632-c716-11e8-8911-000c2967769a   14s

$ kubectl describe volumesnapshot
Name:         new-snapshot-demo
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2018-10-03T14:16:45Z
  Generation:          1
  Resource Version:    2476
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/namespaces/default/volumesnapshots/new-snapshot-demo
  UID:                 f55db632-c716-11e8-8911-000c2967769a
Spec:
  Snapshot Class Name:    csi-hostpath-snapclass
  Snapshot Content Name:  snapcontent-f55db632-c716-11e8-8911-000c2967769a
  Source:
    Kind:  PersistentVolumeClaim
    Name:  csi-pvc
Status:
  Creation Time:  2018-10-03T14:16:45Z
  Ready:          true
  Restore Size:   1Gi
Events:           <none>

$ kubectl describe volumesnapshotcontent
Name:         snapcontent-f55db632-c716-11e8-8911-000c2967769a
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshotContent
Metadata:
  Creation Timestamp:  2018-10-03T14:16:45Z
  Generation:          1
  Resource Version:    2474
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotcontents/snapcontent-f55db632-c716-11e8-8911-000c2967769a
  UID:                 f561411f-c716-11e8-8911-000c2967769a
Spec:
  Csi Volume Snapshot Source:
    Creation Time:    1538576205471577525
    Driver:           csi-hostpath
    Restore Size:     1073741824
    Snapshot Handle:  f55ff979-c716-11e8-bb16-000c2967769a
  Persistent Volume Ref:
    API Version:        v1
    Kind:               PersistentVolume
    Name:               pvc-0571cc14-c714-11e8-8911-000c2967769a
    Resource Version:   1573
    UID:                0575b966-c714-11e8-8911-000c2967769a
  Snapshot Class Name:  csi-hostpath-snapclass
  Volume Snapshot Ref:
    API Version:       snapshot.storage.k8s.io/v1alpha1
    Kind:              VolumeSnapshot
    Name:              new-snapshot-demo
    Namespace:         default
    Resource Version:  2472
    UID:               f55db632-c716-11e8-8911-000c2967769a
Events:                <none>
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  snapshotClassName: csi-hostpath-snapclass
  source:
    name: csi-pvc 
    kind: PersistentVolumeClaim

Restore volume from snapshot support

Use the following example to create a volume from a volume snapshot:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/docs/master/book/src/example/snapshot/csi-restore.yaml
persistentvolumeclaim/hpvc-restore created
$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc        Bound    pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            csi-hostpath-sc   24m
hpvc-restore   Bound    pvc-77324684-c717-11e8-8911-000c2967769a   1Gi        RWO            csi-hostpath-sc   6s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS      REASON   AGE
pvc-0571cc14-c714-11e8-8911-000c2967769a   1Gi        RWO            Delete           Bound    default/csi-pvc        csi-hostpath-sc            25m
pvc-77324684-c717-11e8-8911-000c2967769a   1Gi        RWO            Delete           Bound    default/hpvc-restore   csi-hostpath-sc            33s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
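
To verify that the restored volume actually contains the snapshotted data, you can mount the new claim in a pod and look for the file written earlier; a minimal sketch (the pod name is hypothetical):

kind: Pod
apiVersion: v1
metadata:
  name: my-restore-app   # hypothetical name
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: [ "sleep", "1000000" ]
      volumeMounts:
      - mountPath: "/data"
        name: restored-volume
  volumes:
    - name: restored-volume
      persistentVolumeClaim:
        claimName: hpvc-restore   # the claim created from the snapshot above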


If you encounter any problems, please check the Troubleshooting page.

Development

This section describes how developers can create and deploy a CSI driver for Kubernetes.

Developing a CSI driver

To write a CSI driver, a developer must create an application that implements the Identity, Controller, and Node services as described in the CSI specification.

The Drivers page contains a set of drivers which may be used as an example of how to write a CSI driver.

If this is your first driver, you can start with the in-memory sample Mock Driver used for csi-sanity.

Other Resources

Here are some other resources useful for writing CSI drivers:

Implement Snapshot Feature

To implement the snapshot feature, a CSI driver needs to support controller capabilities CREATE_DELETE_SNAPSHOT and LIST_SNAPSHOTS, and implement controller RPCs CreateSnapshot, DeleteSnapshot, and ListSnapshots. For details, see the CSI spec here.

Here are some example CSI plugins that have implemented the snapshot feature:

You can find more sample and production CSI drivers here. Please note that drivers may or may not have implemented the snapshot feature.

Snapshot APIs

The volume snapshot APIs are implemented as CRDs here. Once you deploy the CSI sidecar containers, which include the external-snapshotter, in your cluster, the external-snapshotter will pre-install the snapshot CRDs.

Enable VolumeSnapshotDataSource Feature Gate

Since volume snapshot is an alpha feature in Kubernetes v1.12, you need to enable a new alpha feature gate called VolumeSnapshotDataSource in the API server binary:

--feature-gates=VolumeSnapshotDataSource=true

Deploy External-Snapshotter with CSI Driver

The snapshot controller is implemented as a sidecar helper container called External-Snapshotter. External-Snapshotter watches VolumeSnapshot and VolumeSnapshotContent API objects and triggers CreateSnapshot and DeleteSnapshot operations.

It is recommended that the External-Snapshotter and External-Provisioner sidecar containers be deployed together with the CSI driver in a StatefulSet. See this example yaml file, which deploys External-Snapshotter and External-Provisioner with the Hostpath CSI driver. Run the following command to start the sidecar containers and the CSI driver:

kubectl create -f setup-csi-snapshotter.yaml

Test Snapshot Feature

Use the following example yaml files to test the snapshot feature.

Create a StorageClass:

kubectl create -f storageclass.yaml

Create a PVC:

kubectl create -f pvc.yaml

Create a VolumeSnapshotClass:

kubectl create -f snapshotclass.yaml

Create a VolumeSnapshot:

kubectl create -f snapshot.yaml

Create a PVC from a VolumeSnapshot:

kubectl create -f restore.yaml

PVC not Bound

If a PVC is not bound, the attempt to create a volume snapshot from that PVC will fail. No retries will be attempted. An event will be logged to indicate that the PVC is not bound.

Note that this could happen if the PVC spec and the VolumeSnapshot spec are in the same yaml file. In this case, when the VolumeSnapshot object is created, the PVC object has been created but volume creation is not yet complete, and therefore the PVC is not yet bound. You need to wait until the PVC is bound and then try to create the snapshot again.
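
A quick way to check that the claim is bound before creating the snapshot (using the csi-pvc claim from the example above):

$ kubectl get pvc csi-pvc -o jsonpath='{.status.phase}'
Bound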

Deploying in Kubernetes

This page describes to CSI driver developers how to deploy their driver onto a Kubernetes cluster.

Overview

There are three components, plus the kubelet, that enable CSI drivers to provide storage to Kubernetes. These components are sidecar containers responsible for communicating with both Kubernetes and the CSI driver, making the appropriate CSI calls for their respective Kubernetes events.

Sidecar Containers

Sidecar containers manage Kubernetes events and make the appropriate calls to the CSI driver. These are the external attacher, external provisioner, and the driver registrar.

External Attacher

external-attacher is a sidecar container that watches Kubernetes VolumeAttachment objects and triggers CSI ControllerPublish and ControllerUnpublish operations against a driver endpoint. As of this writing, the external attacher does not support leader election and therefore there can be only one running per CSI driver. For more information please read Attaching and Detaching.

Note that even though this component is called the external attacher, its function is to make the CSI ControllerPublish and ControllerUnpublish calls. These calls will most likely occur on a node other than the one that will mount the volume. For this reason, many CSI drivers do not support these calls and instead perform the attach/detach and mount/unmount in the CSI NodePublish and NodeUnpublish calls made by the kubelet on the node that is to mount the volume.

External Provisioner

external-provisioner is a sidecar container that watches Kubernetes PersistentVolumeClaim objects and triggers CSI CreateVolume and DeleteVolume operations against a driver endpoint. For more information please read Provisioning and Deleting.

External Snapshotter

external-snapshotter is a sidecar container that watches Kubernetes VolumeSnapshot objects and triggers CSI CreateSnapshot and DeleteSnapshot operations against a driver endpoint. For more information please read Snapshot Design Proposal.

Driver Registrar

driver-registrar is a sidecar container that registers the CSI driver with the kubelet and adds the driver's custom NodeId as a label on the Kubernetes Node API object. It does this by communicating with the Identity service on the CSI driver and calling the CSI GetNodeId operation. The driver registrar must have the Kubernetes name for the node set through the environment variable KUBE_NODE_NAME as follows:

        - name: csi-driver-registrar
          imagePullPolicy: Always
          image: quay.io/k8scsi/driver-registrar:v0.2.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi

Kubelet

The Kubernetes kubelet runs on every node and is responsible for making the CSI NodePublish and NodeUnpublish calls. These calls mount and unmount the storage volume from the storage system, making it available to the Pod to consume. As noted in the External Attacher section, most CSI drivers choose to implement both their attach/detach and mount/unmount operations in the NodePublish and NodeUnpublish calls, because the kubelet makes the request on the node that is to consume the volume.

Mount point

The mount point used by the CSI driver must be set to Bidirectional. See the example below:

          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory

RBAC Rules

Sidecar containers need the appropriate permissions to access and manipulate Kubernetes objects. Here are the RBAC rules needed:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-hostpath-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["create", "delete", "get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]

Deploying

Deploying a CSI driver onto Kubernetes is highlighted in detail in Recommended Mechanism for Deploying CSI Drivers on Kubernetes.

Examples

  • Simple deployment example using a single pod for all components: see the hostpath example.
  • Full deployment example using a DaemonSet for the node plugin and StatefulSet for the controller plugin: check the NFS driver deployment files.

More information

For more information, please read CSI Volume Plugins in Kubernetes Design Doc.

Testing

This document describes how developers can test their CSI clients and drivers.

Testing CSI Clients

If you are writing a CSI client, such as a CO or a sidecar container, then you can use some of the following methods to test your application.

  • csi-test unit test mock driver: The csi-test repo provides automatically generated Golang mock code that can be used for unit tests.
  • mock-driver: This driver can be used as an external service to test your gRPC calls.
  • hostPath driver: This driver can be used on a single node to test mounting and unmounting of storage.

CSI-Test Unit Test Mock Driver

The csi-test unit test mock driver enables Golang clients to test all aspects of their code. This is done by using the mock driver generated with GoMock, which lets the caller verify parameters and test returned values. Here is a small example:

    // Setup mock
    m := gomock.NewController(&mock_utils.SafeGoroutineTester{})
    defer m.Finish()
    driver := mock_driver.NewMockIdentityServer(m)

    // Setup input
    in := &csi.GetPluginInfoRequest{
        Version: &csi.Version{
            Major: 0,
            Minor: 1,
            Patch: 0,
        },
    }

    // Setup mock output
    out := &csi.GetPluginInfoResponse{
        Name:          "mock",
        VendorVersion: "0.1.1",
        Manifest: map[string]string{
            "hello": "world",
        },
    }

    // Setup expectation
    // !IMPORTANT!: Must set context expected value to gomock.Any() to match any value
    driver.EXPECT().GetPluginInfo(gomock.Any(), in).Return(out, nil).Times(1)

    // Create a new RPC
    server := mock_driver.NewMockCSIDriver(&mock_driver.MockCSIDriverServers{
        Identity: driver,
    })
    conn, err := server.Nexus()
    if err != nil {
        t.Errorf("Error: %s", err.Error())
    }
    defer server.Close()

    // Make call
    c := csi.NewIdentityClient(conn)
    r, err := c.GetPluginInfo(context.Background(), in)
    if err != nil {
        t.Errorf("Error: %s", err.Error())
    }

    name := r.GetName()
    if name != "mock" {
        t.Errorf("Unknown name: %s\n", name)
    }

More Information

For more examples and information see:

HostPath Driver

The hostPath driver is probably the simplest CSI driver to use for testing on a single node. It is the driver used for CSI e2e tests in Kubernetes. See the Example page for deployment and usage instructions.

Testing CSI Drivers

There are multiple ways to test your driver, some of which are still in development. This page describes each of these methods.

Unit Testing

There are multiple ways to test your driver. One way is to exercise every call by writing your own client for your unit tests as done in the Portworx driver.

Another way to test your driver is to use the sanity package from csi-test. This simple package contains a single call that will test your driver according to the CSI specification. Here is an example of how it can be used:

func TestMyDriver(t *testing.T) {
    // Set up the full driver and its environment
    // ... setup driver ...

    // Now call the test suite
    sanity.Test(t, driverEndpointAddress)
}

Functional Testing

For functional testing you can again provide your own model, or some of the following tools:

csi-sanity

csi-sanity is a program from csi-test which tests your driver based on the sanity package.

Here is a sample way to use it:

$ csi-sanity --ginkgo.v --csi.endpoint=<your csi driver endpoint>

For more information, please see csi-sanity.

Troubleshooting

Node plugin pod does not start with RunContainerError status

kubectl describe pod your-nodeplugin-pod shows:

failed to start container "your-driver": Error response from daemon:
linux mounts: Path /var/lib/kubelet/pods is mounted on / but it is not a shared mount

Your Docker host is not configured to allow shared mounts. Take a look at this page for instructions to enable them.

External attacher can't find VolumeAttachments

If you have a Kubernetes 1.9 cluster, the inability to list VolumeAttachment objects, together with the following error, is due to the missing storage.k8s.io/v1alpha1=true runtime configuration:

$ kubectl logs csi-pod external-attacher
...
I0306 16:34:50.976069       1 reflector.go:240] Listing and watching *v1alpha1.VolumeAttachment from github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86

E0306 16:34:50.992034       1 reflector.go:205] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1alpha1.VolumeAttachment: the server could not find the requested resource
...

Please see the Kubernetes 1.9 page.

Problems with the external components

The external component images are under active development, and they can become incompatible with each other. If the issues above have been ruled out, contact the sig-storage team and/or run the e2e test:

go run hack/e2e.go -- --provider=local --test --test_args="--ginkgo.focus=Feature:CSI"

References

Archive

In this section, you will find information about CSI support in older Kubernetes versions.

CSI with Kubernetes 1.9

Since CSI support is alpha in Kubernetes 1.9, the following flags must be set explicitly:

  • API Server binary:
--allow-privileged=true
--feature-gates=CSIPersistentVolume=true,MountPropagation=true
--runtime-config=storage.k8s.io/v1alpha1=true
  • Controller-manager binary
--feature-gates=CSIPersistentVolume=true
  • Kubelet
--allow-privileged=true
--feature-gates=CSIPersistentVolume=true,MountPropagation=true

Developers

If you are a developer and are using the script cluster/kube-up.sh from the Kubernetes repo, then you can set values using the following environment variables:

export KUBE_RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true"
export KUBE_FEATURE_GATES="MountPropagation=true,CSIPersistentVolume=true"

When using the script hack/local-up-cluster.sh, set the same variables without the KUBE_ prefix:

export RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true"
export FEATURE_GATES="MountPropagation=true,CSIPersistentVolume=true"

Confirming the setup

Once the system is up, to confirm that the runtime config has taken effect, run the following command; it should report that no resources exist rather than return an error:

$ kubectl get volumeattachments

To confirm that the feature gate has taken effect, submit the following fake PersistentVolume specification. If it is accepted, then the feature gate has been set correctly, and you may go ahead and delete the object:

apiVersion: v1
kind: PersistentVolume
metadata:
    name: fakepv
spec:
    capacity:
        storage: 1Gi
    accessModes:
        - ReadWriteMany
    csi:
        driver: fake
        volumeHandle: "1"
        readOnly: false

CSI with Kubernetes 1.10

TBD