Introduction

Kubernetes Container Storage Interface (CSI) Documentation

This site documents how to develop, deploy, and test a Container Storage Interface (CSI) driver on Kubernetes.

The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI, third-party storage providers can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code.

The target audience for this site is third-party developers interested in developing CSI drivers for Kubernetes.

Kubernetes users interested in how to deploy or manage an existing CSI driver on Kubernetes should look at the documentation provided by the author of the CSI driver.

Kubernetes users interested in how to use a CSI driver should look at kubernetes.io documentation.

Kubernetes Releases

| Kubernetes | CSI Spec Compatibility | Status |
|---|---|---|
| v1.9 | v0.1.0 | Alpha |
| v1.10 | v0.2.0 | Beta |
| v1.11 | v0.3.0 | Beta |
| v1.13 | v0.3.0, v1.0.0 | GA |

Development and Deployment

Minimum Requirements (for Developing and Deploying a CSI driver for Kubernetes)

Kubernetes is as minimally prescriptive about packaging and deployment of a CSI Volume Driver as possible.

The only requirements are around how Kubernetes (master and node) components find and communicate with a CSI driver.

Specifically, the following is dictated by Kubernetes regarding CSI:

  • Kubelet to CSI Driver Communication
    • Kubelet directly issues CSI calls (like NodeStageVolume, NodePublishVolume, etc.) to CSI drivers via a Unix Domain Socket to mount and unmount volumes.
    • Kubelet discovers CSI drivers (and the Unix Domain Socket to use to interact with a CSI driver) via the kubelet plugin registration mechanism.
    • Therefore, all CSI drivers deployed on Kubernetes MUST register themselves using the kubelet plugin registration mechanism on each supported node.
  • Master to CSI Driver Communication
    • Kubernetes master components do not communicate directly (via a Unix Domain Socket or otherwise) with CSI drivers.
    • Kubernetes master components interact only with the Kubernetes API.
    • Therefore, CSI drivers that require operations that depend on the Kubernetes API (like volume create, volume attach, volume snapshot, etc.) MUST watch the Kubernetes API and trigger the appropriate CSI operations against the CSI driver.

Because these requirements are minimally prescriptive, CSI driver developers are free to implement and deploy their drivers as they see fit.

That said, to ease development and deployment, the mechanism described below is recommended.

Recommended Mechanism (for Developing and Deploying a CSI driver for Kubernetes)

The Kubernetes development team has established a "Recommended Mechanism" for developing, deploying, and testing CSI Drivers on Kubernetes. It aims to reduce boilerplate code and simplify the overall process for CSI Driver developers.

This "Recommended Mechanism" makes use of the following components:

To implement a CSI driver using this mechanism, a CSI driver developer should:

  1. Create a containerized application implementing the Identity, Node, and optionally the Controller services described in the CSI specification (the CSI driver container).
  2. Unit test it using csi-sanity.
  3. Define Kubernetes API YAML files that deploy the CSI driver container along with appropriate sidecar containers.
  4. Deploy the driver on a Kubernetes cluster and run end-to-end functional tests on it.

Reference Links

Developing CSI Driver for Kubernetes

Remain Informed

All developers of CSI drivers should join https://groups.google.com/forum/#!forum/container-storage-interface-drivers-announce to remain informed about changes to CSI or Kubernetes that may affect existing CSI drivers.

Overview

The first step to creating a CSI driver is writing an application implementing the gRPC services described in the CSI specification.

At a minimum, CSI drivers must implement the following CSI services:

  • CSI Identity service
    • Enables callers (Kubernetes components and CSI sidecar containers) to identify the driver and what optional functionality it supports.
  • CSI Node service
    • Only NodePublishVolume, NodeUnpublishVolume, and NodeGetCapabilities are required.
    • Required methods enable callers to make a volume available at a specified path and discover what optional functionality the driver supports.

All CSI services may be implemented in the same CSI driver application. The CSI driver application should be containerized to make it easy to deploy on Kubernetes. Once containerized, the CSI driver can be paired with CSI Sidecar Containers and deployed in node and/or controller mode as appropriate.
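
The sketch below illustrates the skeleton of such an application in Go, serving only the required Identity service over a Unix Domain Socket via the official CSI Go bindings (github.com/container-storage-interface/spec/lib/go/csi). It is a minimal example, not a complete driver; the driver name, version, and socket path are placeholders.

```go
package main

import (
	"context"
	"net"
	"os"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

// identityServer implements the three required CSI Identity RPCs.
type identityServer struct{}

func (identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	// The name must match the CSIDriver object and the StorageClass provisioner field.
	return &csi.GetPluginInfoResponse{Name: "exampledriver.example.com", VendorVersion: "0.1.0"}, nil
}

func (identityServer) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	// An empty list advertises no optional functionality (no Controller service, no topology).
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{Ready: wrapperspb.Bool(true)}, nil
}

func main() {
	const endpoint = "/csi/csi.sock" // placeholder; must match the socket the sidecars and kubelet are pointed at
	_ = os.Remove(endpoint)          // clean up a stale socket from a previous run
	lis, err := net.Listen("unix", endpoint)
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, identityServer{})
	// A real driver also calls csi.RegisterNodeServer here, and
	// csi.RegisterControllerServer if it implements the Controller service.
	_ = srv.Serve(lis)
}
```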

Capabilities

If your driver supports additional features, CSI "capabilities" can be used to advertise the optional methods/services it supports, for example:

  • CONTROLLER_SERVICE (PluginCapability)
    • The entire CSI Controller service is optional. This capability indicates the driver implements one or more of the methods in the CSI Controller service.
  • VOLUME_ACCESSIBILITY_CONSTRAINTS (PluginCapability)
    • This capability indicates the volumes for this driver may not be equally accessible from all nodes in the cluster, and that the driver will return additional topology related information that Kubernetes can use to schedule workloads more intelligently or influence where a volume will be provisioned.
  • VolumeExpansion (PluginCapability)
    • This capability indicates the driver supports resizing (expanding) volumes after creation.
  • CREATE_DELETE_VOLUME (ControllerServiceCapability)
    • This capability indicates the driver supports dynamic volume provisioning and deleting.
  • PUBLISH_UNPUBLISH_VOLUME (ControllerServiceCapability)
    • This capability indicates the driver implements ControllerPublishVolume and ControllerUnpublishVolume -- operations that correspond to the Kubernetes volume attach/detach operations. This may, for example, result in a "volume attach" operation against the Google Cloud control plane to attach the specified volume to the specified node for the Google Cloud PD CSI Driver.
  • CREATE_DELETE_SNAPSHOT (ControllerServiceCapability)
    • This capability indicates the driver supports provisioning volume snapshots and the ability to provision new volumes using those snapshots.
  • CLONE_VOLUME (ControllerServiceCapability)
    • This capability indicates the driver supports cloning of volumes.
  • STAGE_UNSTAGE_VOLUME (NodeServiceCapability)
    • This capability indicates the driver implements NodeStageVolume and NodeUnstageVolume -- operations that correspond to the Kubernetes volume device mount/unmount operations. This may, for example, be used to create a global (per node) volume mount of a block storage device.

This is a partial list; please see the CSI spec for a complete list of capabilities. Also see the Features section to understand how a feature integrates with Kubernetes.
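
As an illustration, a sketch of how a driver might advertise capabilities using the Go CSI bindings follows; the particular set returned here (Controller service, dynamic provisioning, attach/detach) is only an example and must match what the driver actually implements.

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type identity struct{}
type controller struct{}

// GetPluginCapabilities (Identity service) advertises plugin-level capabilities,
// here that the optional Controller service is implemented.
func (identity) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	return &csi.GetPluginCapabilitiesResponse{
		Capabilities: []*csi.PluginCapability{
			{Type: &csi.PluginCapability_Service_{
				Service: &csi.PluginCapability_Service{Type: csi.PluginCapability_Service_CONTROLLER_SERVICE},
			}},
		},
	}, nil
}

// ControllerGetCapabilities advertises which optional Controller RPCs this driver implements.
func (controller) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (*csi.ControllerGetCapabilitiesResponse, error) {
	rpcs := []csi.ControllerServiceCapability_RPC_Type{
		csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
		csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME,
	}
	caps := make([]*csi.ControllerServiceCapability, 0, len(rpcs))
	for _, t := range rpcs {
		caps = append(caps, &csi.ControllerServiceCapability{
			Type: &csi.ControllerServiceCapability_Rpc{Rpc: &csi.ControllerServiceCapability_RPC{Type: t}},
		})
	}
	return &csi.ControllerGetCapabilitiesResponse{Capabilities: caps}, nil
}
```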

Versioning, Support, and Kubernetes Compatibility

Versioning

Each Kubernetes CSI component version is expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning.

Patch version releases only contain bug fixes that do not break any backwards compatibility.

Minor version releases may contain new functionality that does not break backwards compatibility (except for alpha features).

Major version releases may contain new functionality or fixes that may break backwards compatibility with previous major releases. Changes that require a major version increase include: removing or changing an API, flag, or behavior; new RBAC requirements that are not opt-in; and new Kubernetes minimum version requirements.

A litmus test for backwards compatibility: the image of a component can be replaced in an existing deployment without changing that deployment in any other way.

To minimize the number of branches we need to support, we do not have a general policy for releasing new minor versions on older majors. We will make exceptions for work related to meeting production readiness requirements. Only the previous major version will be eligible for these exceptions, so long as the time between the previous major version and the current major version is under six months. For example, if "X.0.0" and "X+1.0.0" were released under six months apart, "X.0.0" would be eligible for new minor releases.

Support

The Kubernetes CSI project follows the broader Kubernetes project on support. Every minor release branch will be supported with patch releases on an as-needed basis for at least 1 year, starting with the first release of that minor version. In addition, the minor release branch will be supported for at least 3 months after the next minor version is released, to allow time to integrate with the latest release.

Alpha Features

Alpha features are subject to break or be removed across Kubernetes and CSI component releases. There is no guarantee alpha features will continue to function if upgrading the Kubernetes cluster or upgrading a CSI sidecar or controller.

Kubernetes Compatibility

Each release of a CSI component has a minimum, maximum and recommended Kubernetes version that it is compatible with.

Minimum Version

The minimum version specifies the lowest Kubernetes version where the component's basic functionality works (excluding any features added in later releases). Generally, this aligns with the Kubernetes version where that CSI spec version was added.

Maximum Version

The maximum Kubernetes version specifies the last working Kubernetes version for all beta and GA features that the component supports. This generally aligns with the Kubernetes release immediately before the one where support for the CSI spec version, or a particular Kubernetes API or feature the component depends on, was removed.

Recommended Version

Note that any new features added may have dependencies on Kubernetes versions greater than the minimum Kubernetes version. The recommended Kubernetes version specifies the lowest Kubernetes version needed where all its supported features will function correctly. Trying to use a new sidecar feature on a Kubernetes cluster below the recommended Kubernetes version may fail to function correctly. For that reason, it is encouraged to stay as close to the recommended Kubernetes version as possible.

For more details on which features are supported with which Kubernetes versions and their corresponding CSI components, please see each feature's individual page.

Kubernetes Changelog

This page summarizes major CSI changes made in each Kubernetes release. For details on individual features, visit the Features section.

Kubernetes 1.28

Features

Kubernetes 1.27

Features

Kubernetes 1.26

Features

  • GA
    • Delegate fsgroup to CSI driver
    • Azure File CSI migration
    • vSphere CSI migration
  • Alpha
    • Cross namespace volume provisioning

Kubernetes 1.25

Features

Deprecation

  • In-tree plugin removal:
    • AWS EBS
    • Azure Disk

Kubernetes 1.24

Features

  • GA
    • Volume expansion
    • Storage capacity tracking
    • Azure Disk CSI Migration
    • OpenStack Cinder CSI Migration
  • Beta
    • Volume populator
  • Alpha
    • SELinux relabeling with mount options
    • Prevent volume mode conversion

Kubernetes 1.23

Features

  • GA
    • CSI fsgroup policy
    • Non-recursive fsgroup ownership
    • Generic ephemeral volumes
  • Beta
    • Delegate fsgroup to CSI driver
    • Azure Disk CSI Migration (on-by-default)
    • AWS EBS CSI Migration (on-by-default)
    • GCE PD CSI Migration (on-by-default)
  • Alpha
    • Recover from Expansion Failure
    • Honor PV Reclaim Policy
    • RBD CSI Migration
    • Portworx CSI migration

Kubernetes 1.22

Features

  • GA
    • Windows CSI (CSI-Proxy API v1)
    • Pod token requests (CSIServiceAccountToken)
  • Alpha
    • ReadWriteOncePod access mode
    • Delegate fsgroup to CSI driver
    • Generic data populators

Kubernetes 1.21

Features

  • Beta
    • Pod token requests (CSIServiceAccountToken)
    • Storage capacity tracking
    • Generic ephemeral volumes

Kubernetes 1.20

Breaking Changes

  • Kubelet no longer creates the target_path for NodePublishVolume in accordance with the CSI spec. Kubelet also no longer checks if staging and target paths are mounts or corrupted. CSI drivers need to be idempotent and do any necessary mount verification.

Features

  • GA
    • Volume snapshots and restore
  • Beta
    • CSI fsgroup policy
    • Non-recursive fsgroup ownership
  • Alpha
    • Pod token requests (CSIServiceAccountToken)

Kubernetes 1.19

Deprecations

  • The behavior of NodeExpandVolume being called between NodeStageVolume and NodePublishVolume is deprecated for CSI volumes. CSI drivers should support NodeExpandVolume being called after NodePublishVolume if they have the node EXPAND_VOLUME capability.

Features

  • Beta
    • CSI on Windows
    • CSI migration for AzureDisk and vSphere drivers
  • Alpha
    • CSI fsgroup policy
    • Generic ephemeral volumes
    • Storage capacity tracking
    • Volume health monitoring

Kubernetes 1.18

Deprecations

  • storage.k8s.io/v1beta1 CSIDriver object has been deprecated and will be removed in a future release.
  • In a future release, kubelet will no longer create the CSI NodePublishVolume target directory, in accordance with the CSI specification. CSI drivers may need to be updated accordingly to properly create and process the target path.

Features

  • GA
    • Raw block volumes
    • Volume cloning
    • Skip attach
    • Pod info on mount
  • Beta
    • CSI migration for OpenStack Cinder driver.
  • Alpha
    • CSI on Windows
  • storage.k8s.io/v1 CSIDriver object introduced.

Kubernetes 1.17

Breaking Changes

  • CSI 0.3 support has been removed. CSI 0.3 drivers will no longer function.

Deprecations

  • storage.k8s.io/v1beta1 CSINode object has been deprecated and will be removed in a future release.

Features

  • GA
    • Volume topology
    • Volume limits
  • Beta
    • Volume snapshots and restore
    • CSI migration for AWS EBS and GCE PD drivers
  • storage.k8s.io/v1 CSINode object introduced.

Kubernetes 1.16

Features

  • Beta
    • Volume cloning
    • Volume expansion
    • Ephemeral local volumes

Kubernetes 1.15

Features

  • Volume capacity usage metrics
  • Alpha
    • Volume cloning
    • Ephemeral local volumes
    • Resizing secrets

Kubernetes 1.14

Breaking Changes

  • csi.storage.k8s.io/v1alpha1 CSINodeInfo and CSIDriver CRDs are no longer supported.

Features

  • Beta
    • Topology
    • Raw block
    • Skip attach
    • Pod info on mount
  • Alpha
    • Volume expansion
  • storage.k8s.io/v1beta1 CSINode and CSIDriver objects introduced.

Kubernetes 1.13

Deprecations

  • CSI spec 0.2 and 0.3 are deprecated and support will be removed in Kubernetes 1.17.

Features

Kubernetes 1.12

Breaking Changes

  • Kubelet device plugin registration is enabled by default, which requires CSI plugins to use driver-registrar:v0.3.0 to register with kubelet.

Features

  • Alpha
    • Snapshots
    • Topology
    • Skip attach
    • Pod info on mount
  • csi.storage.k8s.io/v1alpha1 CSINodeInfo and CSIDriver CRDs were introduced and have to be installed before deploying a CSI driver.

Kubernetes 1.11

Features

Kubernetes 1.10

Breaking Changes

  • CSI spec 0.1 is no longer supported.

Features

  • Beta support added for CSI spec 0.2. This added optional NodeStageVolume and NodeUnstageVolume calls which map to Kubernetes MountDevice and UnmountDevice operations.

Kubernetes 1.9

Features

Kubernetes Cluster Controllers

The Kubernetes cluster controllers are responsible for managing snapshot objects and operations across multiple CSI drivers, so they should be bundled and deployed by the Kubernetes distributors as part of their Kubernetes cluster management process (independent of any CSI Driver).

The Kubernetes development team maintains the following Kubernetes cluster controllers:

Snapshot Controller

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-snapshotter

Status: GA v4.0.0+

When Volume Snapshot was promoted to Beta in Kubernetes 1.17, the CSI external-snapshotter sidecar controller was split into two controllers: a snapshot-controller and a CSI external-snapshotter sidecar. See the following table for snapshot-controller release information.

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-snapshotter v6.3.0 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
| external-snapshotter v6.2.2 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-snapshotter v6.1.0 | release-6.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0 | v1.20 | - | v1.24 |
| external-snapshotter v6.0.1 | release-6.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v6.0.1 | v1.20 | - | v1.24 |
| external-snapshotter v5.0.1 | release-5.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v5.0.1 | v1.20 | - | v1.22 |
| external-snapshotter v4.2.1 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v4.2.1 | v1.20 | - | v1.22 |
| external-snapshotter v4.1.1 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v4.1.1 | v1.20 | - | v1.20 |
| external-snapshotter v4.0.1 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v4.0.1 | v1.20 | - | v1.20 |
| external-snapshotter v3.0.3 (beta) | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v3.0.3 | v1.17 | - | v1.17 |
| external-snapshotter v2.1.4 (beta) | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v2.1.4 | v1.17 | - | v1.17 |

For more information on the CSI external-snapshotter sidecar, see this external-snapshotter page.

Description

The snapshot controller watches the Kubernetes API server for VolumeSnapshot and VolumeSnapshotContent CRD objects, while the CSI external-snapshotter sidecar watches only for VolumeSnapshotContent CRD objects. The snapshot controller creates the VolumeSnapshotContent CRD object, which triggers the CSI external-snapshotter sidecar to create a snapshot on the storage system.

For detailed snapshot beta design changes, see the design doc here.

For detailed information about volume snapshot and restore functionality, see Volume Snapshot & Restore.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-snapshotter/blob/release-6.2/README.md.

Deployment

Kubernetes distributors should bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver).

If your cluster does not come pre-installed with the correct components, you may manually install these components by executing the following steps.

git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-6.2
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -

Snapshot Validation Webhook

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-snapshotter

Status: GA as of 4.0.0

There is a new validating webhook server which provides tightened validation on snapshot objects. This SHOULD be installed by the Kubernetes distros along with the snapshot-controller, not by end users. It SHOULD be installed in all Kubernetes clusters that have the snapshot feature enabled.

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-snapshotter v6.3.0 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
| external-snapshotter v6.2.2 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-snapshotter v6.1.0 | release-6.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0 | v1.20 | - | v1.24 |
| snapshot-validation-webhook v6.0.1 | release-6.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v6.0.1 | v1.20 | - | v1.24 |
| snapshot-validation-webhook v5.0.1 | release-5.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v5.0.1 | v1.20 | - | v1.22 |
| snapshot-validation-webhook v4.2.1 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v4.2.1 | v1.20 | - | v1.22 |
| snapshot-validation-webhook v4.1.1 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v4.1.0 | v1.20 | - | v1.20 |
| snapshot-validation-webhook v4.0.1 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v4.0.1 | v1.20 | - | v1.20 |
| snapshot-validation-webhook v3.0.3 | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v3.0.3 | v1.17 | - | v1.17 |

Description

The snapshot validating webhook is an HTTP callback which responds to admission requests. It is part of a larger plan to tighten validation for volume snapshot objects, and introduces a ratcheting validation mechanism as a step toward that tighter validation. The cluster admin or Kubernetes distribution admin should install the webhook alongside the snapshot controllers and CRDs.

:warning: WARNING: Cluster admins who choose not to install the webhook server and not to participate in the phased release process may encounter problems when upgrading from the v1beta1 to the v1 VolumeSnapshot API, if there are persisted objects that fail the new, stricter validation. Potential impacts include being unable to delete invalid snapshot objects.

Deployment

Kubernetes distributors should bundle and deploy the snapshot validation webhook along with the snapshot controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver).

Read more about how to install the example webhook here.

CSI Proxy

Status and Releases

Git Repository: https://github.com/kubernetes-csi/csi-proxy

Status: V1 starting with v1.0.0

| Status | Min K8s Version | Max K8s Version |
|---|---|---|
| v0.1.0 | 1.18 | - |
| v0.2.0+ | 1.18 | - |
| v1.0.0+ | 1.18 | - |

Description

CSI Proxy is a binary that exposes a set of gRPC APIs around storage operations over named pipes in Windows. A container, such as a CSI node plugin, can mount the named pipes corresponding to the operations it wants to exercise on the host and invoke the APIs.

Each named pipe will support a specific version of an API (e.g. v1alpha1, v2beta1) that targets a specific area of storage (e.g. disk, volume, file, SMB, iSCSI). For example, \\.\pipe\csi-proxy-filesystem-v1alpha1, \\.\pipe\csi-proxy-disk-v1beta1. Any release of csi-proxy.exe binary will strive to maintain backward compatibility across as many prior stable versions of an API group as possible. Please see details in this CSI Windows support KEP.
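
As a sketch of the client side (assuming the github.com/Microsoft/go-winio package, so this builds only on Windows), a consumer could dial one of these named pipes and layer a gRPC connection on top as follows; the pipe name and timeout are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"github.com/Microsoft/go-winio"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Pipe name from the text above; the API group and version must match
	// what the installed csi-proxy.exe actually serves.
	const pipe = `\\.\pipe\csi-proxy-filesystem-v1alpha1`

	dialer := func(ctx context.Context, addr string) (net.Conn, error) {
		timeout := 10 * time.Second
		return winio.DialPipe(addr, &timeout)
	}
	conn, err := grpc.Dial(pipe,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithContextDialer(dialer))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to", pipe)
	// A generated csi-proxy API client (e.g. the filesystem client from
	// github.com/kubernetes-csi/csi-proxy/client) would be created from conn here.
}
```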

Usage

Run csi-proxy.exe binary directly on a Windows node. The command line options are:

  • -kubelet-path: This is the prefix path of the kubelet directory in the host file system (the default value is set to C:\var\lib\kubelet)

  • -windows-service: Configure as a Windows Service

  • -log_file: If non-empty, use this log file. (Note: must set logtostderr=false if setting -log_file)

Note that -kubelet-pod-path and -kubelet-csi-plugins-path were used in versions prior to 1.0.0; they have been replaced by the new parameter -kubelet-path.

For detailed information (binary parameters, etc.), see the README of the relevant branch.

Deployment

It is the responsibility of the Kubernetes distribution or cluster admin to install csi-proxy. Run the csi-proxy.exe binary directly, or run it as a Windows Service, on Kubernetes nodes. For example:

    $flags = "-windows-service -log_file=\etc\kubernetes\logs\csi-proxy.log -logtostderr=false"
    sc.exe create csiproxy binPath= "${env:NODE_DIR}\csi-proxy.exe $flags"
    sc.exe failure csiproxy reset= 0 actions= restart/10000
    sc.exe start csiproxy

Kubernetes CSI Sidecar Containers

Kubernetes CSI Sidecar Containers are a set of standard containers that aim to simplify the development and deployment of CSI Drivers on Kubernetes.

These containers contain common logic to watch the Kubernetes API, trigger appropriate operations against the “CSI volume driver” container, and update the Kubernetes API as appropriate.

The containers are intended to be bundled with third-party CSI driver containers and deployed together as pods.

The containers are developed and maintained by the Kubernetes Storage community.

Use of the containers is strictly optional, but highly recommended.

Benefits of these sidecar containers include:

  • Reduction of "boilerplate" code.
    • CSI Driver developers do not have to worry about complicated, "Kubernetes specific" code.
  • Separation of concerns.
    • Code that interacts with the Kubernetes API is isolated from (and in a different container than) the code that implements the CSI interface.

The Kubernetes development team maintains the following Kubernetes CSI Sidecar Containers:

CSI external-attacher

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-attacher

Status: GA/Stable

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-attacher v4.4.0 | release-4.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.4.0 | v1.17 | - | v1.27 |
| external-attacher v4.3.0 | release-4.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.3.0 | v1.17 | - | v1.22 |
| external-attacher v4.2.0 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.2.0 | v1.17 | - | v1.22 |
| external-attacher v4.1.0 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.1.0 | v1.17 | - | v1.22 |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-attacher v4.0.0 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.0.0 | v1.17 | - | v1.22 |
| external-attacher v3.5.1 | release-3.5 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.5.1 | v1.17 | - | v1.22 |
| external-attacher v3.4.0 | release-3.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.4.0 | v1.17 | - | v1.22 |
| external-attacher v3.3.0 | release-3.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.3.0 | v1.17 | - | v1.22 |
| external-attacher v3.2.1 | release-3.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.2.1 | v1.17 | - | v1.17 |
| external-attacher v3.1.0 | release-3.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.1.0 | v1.17 | - | v1.17 |
| external-attacher v3.0.2 | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.0.2 | v1.17 | - | v1.17 |
| external-attacher v2.2.0 | release-2.2 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v2.2.0 | v1.14 | - | v1.17 |
| external-attacher v2.1.0 | release-2.1 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v2.1.0 | v1.14 | - | v1.17 |
| external-attacher v2.0.0 | release-2.0 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v2.0.0 | v1.14 | - | v1.15 |
| external-attacher v1.2.1 | release-1.2 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v1.2.1 | v1.13 | - | v1.15 |
| external-attacher v1.1.1 | release-1.1 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v1.1.1 | v1.13 | - | v1.14 |
| external-attacher v0.4.2 | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/csi-attacher:v0.4.2 | v1.10 | v1.16 | v1.10 |

Description

The CSI external-attacher is a sidecar container that watches the Kubernetes API server for VolumeAttachment objects and triggers Controller[Publish|Unpublish]Volume operations against a CSI endpoint.
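
Driver-side, attach/detach corresponds to the ControllerPublishVolume and ControllerUnpublishVolume RPCs. The following is a minimal sketch of the publish half; the backend attach call and the devicePath publish-context key are placeholders, since both are entirely driver-specific.

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type controller struct{}

// ControllerPublishVolume is invoked (via the external-attacher watching
// VolumeAttachment objects) to attach a volume to a node.
func (controller) ControllerPublishVolume(ctx context.Context, req *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) {
	if req.GetVolumeId() == "" || req.GetNodeId() == "" {
		return nil, status.Error(codes.InvalidArgument, "volume ID and node ID are required")
	}
	// Placeholder: call the storage backend's "attach volume to node" API here.
	// The call must be idempotent: attaching an already-attached volume succeeds.
	dev := "/dev/disk/by-id/example-" + req.GetVolumeId() // hypothetical device path
	return &csi.ControllerPublishVolumeResponse{
		// PublishContext is later passed to the node service in NodeStageVolume/NodePublishVolume.
		PublishContext: map[string]string{"devicePath": dev},
	}, nil
}
```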

Usage

CSI drivers that require integrating with the Kubernetes volume attach/detach hooks should use this sidecar container, and advertise the CSI PUBLISH_UNPUBLISH_VOLUME controller capability.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-attacher/blob/master/README.md.

Deployment

The CSI external-attacher is deployed as a controller. See deployment section for more details.

CSI external-provisioner

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-provisioner

Status: GA/Stable

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-provisioner v3.6.0 | release-3.6 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.6.0 | v1.20 | - | v1.27 |
| external-provisioner v3.5.0 | release-3.5 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.5.0 | v1.20 | - | v1.26 |
| external-provisioner v3.4.1 | release-3.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.4.1 | v1.20 | - | v1.26 |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-provisioner v3.3.1 | release-3.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.3.1 | v1.20 | - | v1.25 |
| external-provisioner v3.2.2 | release-3.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.2.2 | v1.20 | - | v1.22 |
| external-provisioner v3.1.1 | release-3.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.1.1 | v1.20 | - | v1.22 |
| external-provisioner v3.0.0 | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.0.0 | v1.20 | - | v1.22 |
| external-provisioner v2.2.2 | release-2.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v2.2.2 | v1.17 | - | v1.21 |
| external-provisioner v2.1.2 | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v2.1.2 | v1.17 | - | v1.19 |
| external-provisioner v2.0.5 | release-2.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v2.0.5 | v1.17 | - | v1.19 |
| external-provisioner v1.6.1 | release-1.6 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v1.6.1 | v1.13 | v1.21 | v1.18 |
| external-provisioner v1.5.0 | release-1.5 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.5.0 | v1.13 | v1.21 | v1.17 |
| external-provisioner v1.4.0 | release-1.4 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.4.0 | v1.13 | v1.21 | v1.16 |
| external-provisioner v1.3.1 | release-1.3 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.3.1 | v1.13 | v1.19 | v1.15 |
| external-provisioner v1.2.0 | release-1.2 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.2.0 | v1.13 | v1.19 | v1.14 |
| external-provisioner v0.4.2 | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/csi-provisioner:v0.4.2 | v1.10 | v1.16 | v1.10 |

Description

The CSI external-provisioner is a sidecar container that watches the Kubernetes API server for PersistentVolumeClaim objects.

It calls CreateVolume against the specified CSI endpoint to provision a new volume.

Volume provisioning is triggered by the creation of a new Kubernetes PersistentVolumeClaim object, if the PVC references a Kubernetes StorageClass, and the name in the provisioner field of the storage class matches the name returned by the specified CSI endpoint in the GetPluginInfo call.

Once a new volume is successfully provisioned, the sidecar container creates a Kubernetes PersistentVolume object to represent the volume.

The deletion of a PersistentVolumeClaim object bound to a PersistentVolume corresponding to this driver with a delete reclaim policy causes the sidecar container to trigger a DeleteVolume operation against the specified CSI endpoint to delete the volume. Once the volume is successfully deleted, the sidecar container also deletes the PersistentVolume object representing the volume.
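
A minimal sketch of the driver-side CreateVolume and DeleteVolume RPCs is shown below; the backend calls and the volume ID scheme are placeholders.

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type controller struct{}

// CreateVolume is called by the external-provisioner for each new PVC that
// selects this driver. It must be idempotent with respect to req.Name.
func (controller) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	if req.GetName() == "" {
		return nil, status.Error(codes.InvalidArgument, "volume name is required")
	}
	capacity := req.GetCapacityRange().GetRequiredBytes()
	// Placeholder: create (or find, if it already exists) the volume in the
	// storage backend, keyed by req.Name; req.Parameters carries the
	// StorageClass parameters (minus the reserved csi.storage.k8s.io/ keys).
	volumeID := "vol-" + req.GetName() // hypothetical backend volume ID
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{VolumeId: volumeID, CapacityBytes: capacity},
	}, nil
}

// DeleteVolume is called when a bound PVC/PV with a Delete reclaim policy goes away.
func (controller) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
	// Placeholder: delete req.VolumeId in the backend; deleting a volume that
	// no longer exists must also return success (idempotency).
	return &csi.DeleteVolumeResponse{}, nil
}
```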

DataSources

The external-provisioner provides the ability to request a volume be pre-populated from a data source during provisioning. For more information on how data sources are handled see DataSources.

Snapshot

The CSI external-provisioner supports the Snapshot DataSource. If a Snapshot CRD is specified as a data source on a PVC object, the sidecar container fetches the information about the snapshot by fetching the SnapshotContent object and populates the data source field in the resulting CreateVolume call to indicate to the storage system that the new volume should be populated using the specified snapshot.

PersistentVolumeClaim (clone)

Cloning is also implemented by specifying a kind: of type PersistentVolumeClaim in the DataSource field of a Provision request. It's the responsibility of the external-provisioner to verify that the claim specified in the DataSource object exists, is in the same storage class as the volume being provisioned, and is currently Bound.

StorageClass Parameters

When provisioning a new volume, the CSI external-provisioner sets the map<string, string> parameters field in the CSI CreateVolumeRequest call to the key/values specified in the StorageClass it is handling.

The CSI external-provisioner (v1.0.1+) also reserves the parameter keys prefixed with csi.storage.k8s.io/. Any StorageClass keys prefixed with csi.storage.k8s.io/ are not passed to the CSI driver as an opaque parameter.

The following reserved StorageClass parameter keys trigger behavior in the CSI external-provisioner:

  • csi.storage.k8s.io/provisioner-secret-name
  • csi.storage.k8s.io/provisioner-secret-namespace
  • csi.storage.k8s.io/controller-publish-secret-name
  • csi.storage.k8s.io/controller-publish-secret-namespace
  • csi.storage.k8s.io/node-stage-secret-name
  • csi.storage.k8s.io/node-stage-secret-namespace
  • csi.storage.k8s.io/node-publish-secret-name
  • csi.storage.k8s.io/node-publish-secret-namespace
  • csi.storage.k8s.io/fstype

If the PVC VolumeMode is set to Filesystem, and the value of csi.storage.k8s.io/fstype is specified, it is used to populate the FsType in CreateVolumeRequest.VolumeCapabilities[x].AccessType and the AccessType is set to Mount.

For more information on how secrets are handled see Secrets & Credentials.

Example StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-example-storage
provisioner: exampledriver.example.com
parameters:
  disk-type: ssd
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: mysecret
  csi.storage.k8s.io/provisioner-secret-namespace: mynamespace

PersistentVolumeClaim and PersistentVolume Parameters

The CSI external-provisioner (v1.6.0+) introduces the --extra-create-metadata flag, which automatically sets the following map<string, string> parameters in the CSI CreateVolumeRequest:

  • csi.storage.k8s.io/pvc/name
  • csi.storage.k8s.io/pvc/namespace
  • csi.storage.k8s.io/pv/name

These parameters are not part of the StorageClass, but are internally generated using the name and namespace of the source PersistentVolumeClaim and PersistentVolume.
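
For illustration, a driver could read these keys from the incoming CreateVolumeRequest, for example to tag backend volumes with the PVC that requested them. The helper below is a hypothetical sketch, not part of any library.

```go
package driver

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// pvcInfo extracts the PVC/PV names that the external-provisioner injects
// into the CreateVolume parameters when it runs with --extra-create-metadata.
func pvcInfo(req *csi.CreateVolumeRequest) (pvcName, pvcNamespace, pvName string) {
	p := req.GetParameters()
	return p["csi.storage.k8s.io/pvc/name"],
		p["csi.storage.k8s.io/pvc/namespace"],
		p["csi.storage.k8s.io/pv/name"]
}
```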

Usage

CSI drivers that support dynamic volume provisioning should use this sidecar container, and advertise the CSI CREATE_DELETE_VOLUME controller capability.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-provisioner/blob/master/README.md.

Deployment

The CSI external-provisioner is deployed as a controller. See deployment section for more details.

CSI external-resizer

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-resizer

Status: GA starting with v1.0.0

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-resizer v1.9.0 | release-1.9 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.9.0 | v1.16 | - | v1.28 |
| external-resizer v1.8.0 | release-1.8 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.8.0 | v1.16 | - | v1.23 |
| external-resizer v1.7.0 | release-1.7 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.7.0 | v1.16 | - | v1.23 |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-resizer v1.6.0 | release-1.6 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.6.0 | v1.16 | - | v1.23 |
| external-resizer v1.5.0 | release-1.5 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.5.0 | v1.16 | - | v1.23 |
| external-resizer v1.4.0 | release-1.4 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.4.0 | v1.16 | - | v1.23 |
| external-resizer v1.3.0 | release-1.3 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.3.0 | v1.16 | - | v1.22 |
| external-resizer v1.2.0 | release-1.2 | v1.2.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.2.0 | v1.16 | - | v1.21 |
| external-resizer v1.1.0 | release-1.1 | v1.2.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.1.0 | v1.16 | - | v1.16 |
| external-resizer v0.5.0 | release-0.5 | v1.2.0 | - | quay.io/k8scsi/csi-resizer:v0.5.0 | v1.15 | - | v1.16 |
| external-resizer v0.2.0 | release-0.2 | v1.1.0 | - | quay.io/k8scsi/csi-resizer:v0.2.0 | v1.15 | - | v1.15 |
| external-resizer v0.1.0 | release-0.1 | v1.1.0 | - | quay.io/k8scsi/csi-resizer:v0.1.0 | v1.14 | v1.14 | v1.14 |
| external-resizer v1.0.1 | release-1.0 | v1.2.0 | - | quay.io/k8scsi/csi-resizer:v1.0.1 | v1.16 | - | v1.16 |

Description

The CSI external-resizer is a sidecar container that watches the Kubernetes API server for PersistentVolumeClaim object edits and triggers ControllerExpandVolume operations against a CSI endpoint if the user requested more storage on the PersistentVolumeClaim object.
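
Driver-side, this maps to the ControllerExpandVolume RPC. A minimal sketch follows; the backend resize call is a placeholder.

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type controller struct{}

// ControllerExpandVolume is triggered by the external-resizer when a user
// increases spec.resources.requests.storage on a PVC.
func (controller) ControllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
	want := req.GetCapacityRange().GetRequiredBytes()
	// Placeholder: grow the backend volume req.VolumeId to at least `want` bytes.
	return &csi.ControllerExpandVolumeResponse{
		CapacityBytes: want,
		// True tells Kubernetes that NodeExpandVolume must still run on the
		// node (e.g. to grow the filesystem); false for block-only expansion.
		NodeExpansionRequired: true,
	}, nil
}
```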

Usage

CSI drivers that support Kubernetes volume expansion should use this sidecar container, and advertise the CSI VolumeExpansion plugin capability.

Deployment

The CSI external-resizer is deployed as a controller. See deployment section for more details.

CSI external-snapshotter

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-snapshotter

Status: GA v4.0.0+

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-snapshotter v6.3.0 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
| external-snapshotter v6.2.2 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| external-snapshotter v6.1.0 | release-6.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0 | v1.20 | - | v1.24 |
| external-snapshotter v6.0.1 | release-6.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.0.1 | v1.20 | - | v1.24 |
| external-snapshotter v5.0.1 | release-5.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1 | v1.20 | - | v1.22 |
| external-snapshotter v4.2.1 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v4.2.1 | v1.20 | - | v1.22 |
| external-snapshotter v4.1.1 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v4.1.1 | v1.20 | - | v1.20 |
| external-snapshotter v4.0.1 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v4.0.1 | v1.20 | - | v1.20 |
| external-snapshotter v3.0.3 (beta) | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v3.0.3 | v1.17 | - | v1.17 |
| external-snapshotter v2.1.4 (beta) | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v2.1.4 | v1.17 | - | v1.17 |
| external-snapshotter v1.2.2 (alpha) | release-1.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v1.2.2 | v1.13 | v1.16 | v1.14 |
| external-snapshotter v0.4.2 (alpha) | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/csi-snapshotter:v0.4.2 | v1.12 | v1.16 | v1.12 |

To use the snapshot beta and GA feature, a snapshot controller is also required. For more information, see this snapshot-controller page.

Snapshot Beta/GA

Description

Starting with the Beta version, the snapshot controller watches the Kubernetes API server for VolumeSnapshot and VolumeSnapshotContent CRD objects, while the CSI external-snapshotter sidecar watches only for VolumeSnapshotContent CRD objects. The CSI external-snapshotter sidecar is also responsible for calling the CSI RPCs CreateSnapshot, DeleteSnapshot, and ListSnapshots.
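
Driver-side, a minimal CreateSnapshot sketch might look like the following; the backend snapshot call and the snapshot ID scheme are placeholders.

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/protobuf/types/known/timestamppb"
)

type controller struct{}

// CreateSnapshot is called by the external-snapshotter sidecar when a
// VolumeSnapshotContent object needs a backend snapshot. It must be
// idempotent with respect to req.Name.
func (controller) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
	// Placeholder: snapshot req.SourceVolumeId in the storage backend.
	return &csi.CreateSnapshotResponse{
		Snapshot: &csi.Snapshot{
			SnapshotId:     "snap-" + req.GetName(), // hypothetical backend ID
			SourceVolumeId: req.GetSourceVolumeId(),
			CreationTime:   timestamppb.Now(),
			// Set ReadyToUse to false if cutting the snapshot is asynchronous;
			// the sidecar polls until it becomes true.
			ReadyToUse: true,
		},
	}, nil
}
```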

VolumeSnapshotClass Parameters

When provisioning a new volume snapshot, the CSI external-snapshotter sets the map<string, string> parameters field in the CSI CreateSnapshotRequest call to the key/values specified in the VolumeSnapshotClass it is handling.

The CSI external-snapshotter also reserves the parameter keys prefixed with csi.storage.k8s.io/. Any VolumeSnapshotClass keys prefixed with csi.storage.k8s.io/ are not passed to the CSI driver as an opaque parameter.

The following reserved VolumeSnapshotClass parameter keys trigger behavior in the CSI external-snapshotter:

  • csi.storage.k8s.io/snapshotter-secret-name (v1.0.1+)
  • csi.storage.k8s.io/snapshotter-secret-namespace (v1.0.1+)
  • csi.storage.k8s.io/snapshotter-list-secret-name (v2.1.0+)
  • csi.storage.k8s.io/snapshotter-list-secret-namespace (v2.1.0+)

For more information on how secrets are handled see Secrets & Credentials.

VolumeSnapshot and VolumeSnapshotContent Parameters

The CSI external-snapshotter (v4.0.0+) introduces the --extra-create-metadata flag, which automatically sets the following map<string, string> parameters in the CSI CreateSnapshotRequest:

  • csi.storage.k8s.io/volumesnapshot/name
  • csi.storage.k8s.io/volumesnapshot/namespace
  • csi.storage.k8s.io/volumesnapshotcontent/name

These parameters are internally generated using the name and namespace of the source VolumeSnapshot and VolumeSnapshotContent.

For detailed snapshot beta design changes, see the design doc here.

For detailed information about volume snapshot and restore functionality, see Volume Snapshot & Restore.

Usage

CSI drivers that support provisioning volume snapshots and the ability to provision new volumes using those snapshots should use this sidecar container, and advertise the CSI CREATE_DELETE_SNAPSHOT controller capability.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-snapshotter/blob/release-6.2/README.md.

Deployment

The CSI external-snapshotter is deployed as a sidecar controller. See deployment section for more details.

For an example deployment, see this example which deploys external-snapshotter and external-provisioner with the Hostpath CSI driver.

Snapshot Alpha

Description

The CSI external-snapshotter is a sidecar container that watches the Kubernetes API server for VolumeSnapshot and VolumeSnapshotContent CRD objects.

The creation of a new VolumeSnapshot object referencing a SnapshotClass CRD object corresponding to this driver causes the sidecar container to trigger a CreateSnapshot operation against the specified CSI endpoint to provision a new snapshot. When a new snapshot is successfully provisioned, the sidecar container creates a Kubernetes VolumeSnapshotContent object to represent the new snapshot.

The deletion of a VolumeSnapshot object bound to a VolumeSnapshotContent corresponding to this driver with a delete deletion policy causes the sidecar container to trigger a DeleteSnapshot operation against the specified CSI endpoint to delete the snapshot. Once the snapshot is successfully deleted, the sidecar container also deletes the VolumeSnapshotContent object representing the snapshot.

Usage

CSI drivers that support provisioning volume snapshots and the ability to provision new volumes using those snapshots should use this sidecar container, and advertise the CSI CREATE_DELETE_SNAPSHOT controller capability.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-snapshotter/blob/release-1.2/README.md.

Deployment

The CSI external-snapshotter is deployed as a controller. See deployment section for more details.

For an example deployment, see this example which deploys external-snapshotter and external-provisioner with the Hostpath CSI driver.

CSI livenessprobe

Status and Releases

Git Repository: https://github.com/kubernetes-csi/livenessprobe

Status: GA/Stable

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version |
|---|---|---|---|---|---|---|
| livenessprobe v2.11.0 | release-2.10 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.11.0 | v1.13 | - |
| livenessprobe v2.10.0 | release-2.10 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.10.0 | v1.13 | - |
| livenessprobe v2.9.0 | release-2.9 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.9.0 | v1.13 | - |
| livenessprobe v2.8.0 | release-2.8 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.8.0 | v1.13 | - |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version |
|---|---|---|---|---|---|---|
| livenessprobe v2.7.0 | release-2.7 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.7.0 | v1.13 | - |
| livenessprobe v2.6.0 | release-2.6 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.6.0 | v1.13 | - |
| livenessprobe v2.5.0 | release-2.5 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.5.0 | v1.13 | - |
| livenessprobe v2.4.0 | release-2.4 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.4.0 | v1.13 | - |
| livenessprobe v2.3.0 | release-2.3 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.3.0 | v1.13 | - |
| livenessprobe v2.2.0 | release-2.2 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.2.0 | v1.13 | - |
| livenessprobe v2.1.0 | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.1.0 | v1.13 | - |
| livenessprobe v2.0.0 | release-2.0 | v1.0.0 | - | quay.io/k8scsi/livenessprobe:v2.0.0 | v1.13 | - |
| livenessprobe v1.1.0 | release-1.1 | v1.0.0 | - | quay.io/k8scsi/livenessprobe:v1.1.0 | v1.13 | - |
| Unsupported. | No 0.x branch. | v0.3.0 | v0.3.0 | quay.io/k8scsi/livenessprobe:v0.4.1 | v1.10 | v1.16 |

Description

The CSI livenessprobe is a sidecar container that monitors the health of the CSI driver and reports it to Kubernetes via the Liveness Probe mechanism. This enables Kubernetes to automatically detect issues with the driver and restart the pod to try to fix the issue.
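
Driver-side, the liveness check maps to the CSI Identity Probe RPC. A sketch follows; the backend health check is a placeholder.

```go
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

type identity struct {
	healthy func() bool // placeholder health check against the storage backend
}

// Probe is what the livenessprobe sidecar calls; an error (or Ready=false)
// eventually makes the kubelet liveness probe fail and restart the pod.
func (i identity) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	if !i.healthy() {
		return nil, status.Error(codes.FailedPrecondition, "storage backend unreachable")
	}
	return &csi.ProbeResponse{Ready: wrapperspb.Bool(true)}, nil
}
```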

Usage

All CSI drivers should use the liveness probe to improve the availability of the driver while deployed on Kubernetes.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/livenessprobe/blob/master/README.md.

Deployment

The CSI livenessprobe is deployed as part of controller and node deployments. See deployment section for more details.

CSI node-driver-registrar

Status and Releases

Git Repository: https://github.com/kubernetes-csi/node-driver-registrar

Status: GA/Stable

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| node-driver-registrar v2.9.0 | release-2.8 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.0 | v1.13 | - | 1.25 |
| node-driver-registrar v2.8.0 | release-2.8 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0 | v1.13 | - | - |
| node-driver-registrar v2.7.0 | release-2.7 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0 | v1.13 | - | - |
| node-driver-registrar v2.6.3 | release-2.6 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.3 | v1.13 | - | - |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
|---|---|---|---|---|---|---|---|
| node-driver-registrar v2.5.1 | release-2.5 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1 | v1.13 | - | - |
| node-driver-registrar v2.4.0 | release-2.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.4.0 | v1.13 | - | - |
| node-driver-registrar v2.3.0 | release-2.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.3.0 | v1.13 | - | - |
| node-driver-registrar v2.2.0 | release-2.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.2.0 | v1.13 | - | - |
| node-driver-registrar v2.1.0 | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.1.0 | v1.13 | - | - |
| node-driver-registrar v2.0.0 | release-2.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.0.0 | v1.13 | - | - |
| node-driver-registrar v1.2.0 | release-1.2 | v1.0.0 | - | quay.io/k8scsi/csi-node-driver-registrar:v1.2.0 | v1.13 | - | - |
| driver-registrar v0.4.2 | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/driver-registrar:v0.4.2 | v1.10 | v1.16 | - |

Description

The CSI node-driver-registrar is a sidecar container that fetches driver information (using NodeGetInfo) from a CSI endpoint and registers it with the kubelet on that node using the kubelet plugin registration mechanism.
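
Driver-side, the registration flow relies on the NodeGetInfo RPC. A sketch follows; the node ID, volume limit, and topology segment are all placeholders.

```go
package driver

import (
	"context"
	"os"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type node struct{}

// NodeGetInfo is called through the node-driver-registrar/kubelet flow; the
// returned NodeId is what Kubernetes later passes to ControllerPublishVolume.
func (node) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	hostname, err := os.Hostname()
	if err != nil {
		return nil, err
	}
	return &csi.NodeGetInfoResponse{
		NodeId:            hostname, // placeholder; often a backend-specific instance ID
		MaxVolumesPerNode: 16,       // hypothetical per-node attach limit
		// Only needed by drivers with the VOLUME_ACCESSIBILITY_CONSTRAINTS capability:
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{"topology.example.com/zone": "zone-a"},
		},
	}, nil
}
```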

Usage

Kubelet directly issues CSI NodeGetInfo, NodeStageVolume, and NodePublishVolume calls against CSI drivers. It uses the kubelet plugin registration mechanism to discover the unix domain socket to talk to the CSI driver. Therefore, all CSI drivers should use this sidecar container to register themselves with kubelet.

For detailed information (binary parameters, etc.), see the README of the relevant branch.

Deployment

The CSI node-driver-registrar is deployed per node. See deployment section for more details.

CSI cluster-driver-registrar

Deprecated

This sidecar container has not been updated since Kubernetes 1.13. As of Kubernetes 1.16, this sidecar container is officially deprecated.

The purpose of this sidecar container was to automatically register a CSIDriver object containing information about the driver with Kubernetes. Without it, developers and CSI driver vendors now have to add a CSIDriver object to their installation manifests or to whatever tool installs their CSI driver.

Please see CSIDriver for more information.

Status and Releases

Git Repository: https://github.com/kubernetes-csi/cluster-driver-registrar

Status: Alpha

| Latest stable release | Branch | Compatible with CSI Version | Container Image | Min K8s Version | Max K8s Version |
|---|---|---|---|---|---|
| cluster-driver-registrar v1.0.1 | release-1.0 | v1.0.0 | quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1 | v1.13 | - |
| driver-registrar v0.4.2 | release-0.4 | v0.3.0 | quay.io/k8scsi/driver-registrar:v0.4.2 | v1.10 | - |

Description

The CSI cluster-driver-registrar is a sidecar container that registers a CSI Driver with a Kubernetes cluster by creating a CSIDriver Object which enables the driver to customize how Kubernetes interacts with it.

Usage

CSI drivers that use one of the following Kubernetes features should use this sidecar container:

  • Skip Attach
    • For drivers that don't support ControllerPublishVolume, this indicates to Kubernetes to skip the attach operation and eliminates the need to deploy the external-attacher sidecar.
  • Pod Info on Mount
    • This causes Kubernetes to pass metadata such as Pod name and namespace to the NodePublishVolume call.

If you are not using one of these features, this sidecar container (and the creation of the CSIDriver Object) is not required. However, it is still recommended, because the CSIDriver Object makes it easier for users to discover the CSI drivers installed on their clusters.

For detailed information (binary parameters, etc.), see the README of the relevant branch.

Deployment

The CSI cluster-driver-registrar is deployed as a controller. See deployment section for more details.

CSI external-health-monitor-controller

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-health-monitor

Status: Alpha

Supported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image |
|---|---|---|---|---|
| external-health-monitor-controller v0.10.0 | release-0.8 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.10.0 |
| external-health-monitor-controller v0.9.0 | release-0.8 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.9.0 |
| external-health-monitor-controller v0.8.0 | release-0.8 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.8.0 |

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image |
|---|---|---|---|---|
| external-health-monitor-controller v0.7.0 | release-0.7 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0 |
| external-health-monitor-controller v0.6.0 | release-0.6 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.6.0 |
| external-health-monitor-controller v0.4.0 | release-0.4 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.4.0 |
| external-health-monitor-controller v0.3.0 | release-0.3 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.3.0 |
| external-health-monitor-controller v0.2.0 | release-0.2 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.2.0 |

Description

The CSI external-health-monitor-controller is a sidecar container that is deployed together with the CSI controller driver, similar to how the CSI external-provisioner sidecar is deployed. It calls the CSI controller RPC ListVolumes or ControllerGetVolume to check the health condition of the CSI volumes and report events on PersistentVolumeClaim if the condition of a volume is abnormal.

The CSI external-health-monitor-controller also watches for node failure events. This component can be enabled by setting the enable-node-watcher flag to true. Currently, this only has an effect on local PVs. When a node failure event is detected, an event is reported on the PVC to indicate that pods using this PVC are on a failed node.

Usage

CSI drivers that support VOLUME_CONDITION and LIST_VOLUMES or VOLUME_CONDITION and GET_VOLUME controller capabilities should use this sidecar container.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md.

Deployment

The CSI external-health-monitor-controller is deployed as a controller. See https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md for more details.

CSI external-health-monitor-agent

Status and Releases

Git Repository: https://github.com/kubernetes-csi/external-health-monitor

Status: Deprecated

Unsupported Versions

| Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image |
|---|---|---|---|---|
| external-health-monitor-agent v0.2.0 | release-0.2 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-agent:v0.2.0 |

Description

Note: This sidecar has been deprecated and replaced with the CSIVolumeHealth feature in Kubernetes.

The CSI external-health-monitor-agent is a sidecar container that is deployed together with the CSI node driver, similar to how the CSI node-driver-registrar sidecar is deployed. It calls the CSI node RPC NodeGetVolumeStats to check the health condition of the CSI volumes and report events on Pod if the condition of a volume is abnormal.

Usage

CSI drivers that support VOLUME_CONDITION and NODE_GET_VOLUME_STATS node capabilities should use this sidecar container.

For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md.

Deployment

The CSI external-health-monitor-agent is deployed as a DaemonSet. See https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md for more details.

CSI objects

The Kubernetes API contains the following CSI specific objects:

  • CSIDriver
  • CSINode

The schema definition for the objects can be found in the Kubernetes API reference

CSIDriver Object

Status

  • Kubernetes 1.12 - 1.13: Alpha
  • Kubernetes 1.14: Beta
  • Kubernetes 1.18: GA

What is the CSIDriver object?

The CSIDriver Kubernetes API object serves two purposes:

  1. Simplify driver discovery
  • If a CSI driver creates a CSIDriver object, Kubernetes users can easily discover the CSI Drivers installed on their cluster (simply by issuing kubectl get CSIDriver)
  2. Customizing Kubernetes behavior
  • Kubernetes has a default set of behaviors when dealing with CSI Drivers (for example, it calls the Attach/Detach operations by default). This object allows CSI drivers to specify how Kubernetes should interact with it.

What fields does the CSIDriver object have?

Here is an example of a v1 CSIDriver object:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: mycsidriver.example.com
spec:
  attachRequired: true
  podInfoOnMount: true
  fsGroupPolicy: File # added in Kubernetes 1.19, this field is GA as of Kubernetes 1.23
  volumeLifecycleModes: # added in Kubernetes 1.16, this field is beta
    - Persistent
    - Ephemeral
  tokenRequests: # added in Kubernetes 1.20. See status at https://kubernetes-csi.github.io/docs/token-requests.html#status
    - audience: "gcp"
    - audience: "" # empty string means defaulting to the `--api-audiences` of kube-apiserver
      expirationSeconds: 3600
  requiresRepublish: true # added in Kubernetes 1.20. See status at https://kubernetes-csi.github.io/docs/token-requests.html#status
  seLinuxMount: true # added in Kubernetes 1.25

These are the important fields:

  • name
    • This should correspond to the full name of the CSI driver.
  • attachRequired
    • Indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume method), and that Kubernetes should call attach and wait for any attach operation to complete before proceeding to mounting.
    • If a CSIDriver object does not exist for a given CSI Driver, the default is true -- meaning attach will be called.
    • If a CSIDriver object exists for a given CSI Driver, but this field is not specified, it also defaults to true -- meaning attach will be called.
    • For more information see Skip Attach.
  • podInfoOnMount
    • Indicates this CSI volume driver requires additional pod information (like pod name, pod UID, etc.) during mount operations.
    • If value is not specified or false, pod information will not be passed on mount.
    • If value is set to true, Kubelet will pass pod information as volume_context in CSI NodePublishVolume calls:
      • "csi.storage.k8s.io/pod.name": pod.Name
      • "csi.storage.k8s.io/pod.namespace": pod.Namespace
      • "csi.storage.k8s.io/pod.uid": string(pod.UID)
      • "csi.storage.k8s.io/serviceAccount.name": pod.Spec.ServiceAccountName
    • For more information see Pod Info on Mount.
  • fsGroupPolicy
    • This field was added in Kubernetes 1.19 and cannot be set when using an older Kubernetes release.
    • This field is beta in Kubernetes 1.20 and GA in Kubernetes 1.23.
    • Controls if this CSI volume driver supports volume ownership and permission changes when volumes are mounted.
    • The following modes are supported, and if not specified the default is ReadWriteOnceWithFSType:
      • None: Indicates that volumes will be mounted with no modifications, as the CSI volume driver does not support these operations.
      • File: Indicates that the CSI volume driver supports volume ownership and permission change via fsGroup, and Kubernetes may use fsGroup to change permissions and ownership of the volume to match the user-requested fsGroup in the pod's security context regardless of fstype or access mode.
      • ReadWriteOnceWithFSType: Indicates that volumes will be examined to determine if volume ownership and permissions should be modified to match the pod's security policy. Changes will only occur if the fsType is defined and the persistent volume's accessModes contains ReadWriteOnce. This is the default behavior if no other FSGroupPolicy is defined.
    • For more information see CSI Driver fsGroup Support.
  • volumeLifecycleModes
    • This field was added in Kubernetes 1.16 and cannot be set when using an older Kubernetes release.
    • This field is beta.
    • It informs Kubernetes about the volume modes that are supported by the driver. This ensures that the driver is not used incorrectly by users. The default is Persistent, which is the normal PVC/PV mechanism. Ephemeral enables inline ephemeral volumes in addition (when both are listed) or instead of normal volumes (when it is the only entry in the list).
  • tokenRequests
    • This field was added in Kubernetes 1.20 and cannot be set when using an older Kubernetes release.
    • This field is enabled by default in Kubernetes 1.21 and cannot be disabled since 1.22.
    • If this field is specified, Kubelet will plumb down the bound service account tokens of the pod as volume_context in the NodePublishVolume:
      • "csi.storage.k8s.io/serviceAccount.tokens": {"gcp":{"token":"<token>","expirationTimestamp":"<expiration timestamp in RFC3339>"}}
      • If the CSI driver doesn't find the token recorded in the volume_context, it should return an error in NodePublishVolume to inform the Kubelet to retry.
      • Audiences must be distinct, otherwise the validation will fail. If the audience is "", it means the issued token has the same audience as kube-apiserver.
  • requiresRepublish
    • This field was added in Kubernetes 1.20 and cannot be set when using an older Kubernetes release.
    • This field is enabled by default in Kubernetes 1.21 and cannot be disabled since 1.22.
    • If this field is true, Kubelet will periodically call NodePublishVolume. This is useful in the following scenarios:
      • If the volume mounted by CSI driver is short-lived.
      • If CSI driver requires valid service account tokens (enabled by the field tokenRequests) repeatedly.
    • CSI drivers should only atomically update the contents of the volume; a change of the mount point will not be seen by a running container.
  • seLinuxMount
    • This field is alpha in Kubernetes 1.25. It must be explicitly enabled by setting feature gates ReadWriteOncePod and SELinuxMountReadWriteOncePod.
    • The default value of this field is false.
    • When set to true, the corresponding CSI driver announces that all its volumes are independent volumes from the Linux kernel's point of view and that each of them can be mounted with a different SELinux label mount option (-o context=<SELinux label>). Examples:
      • A CSI driver that creates block devices formatted with a filesystem, such as xfs or ext4, can set seLinuxMount: true, because each volume has its own block device.
      • A CSI driver whose volumes are always separate exports on a NFS server can set seLinuxMount: true, because each volume has its own NFS export and thus Linux kernel treats them as independent volumes.
      • A CSI driver that can provide two volumes as subdirectories of a common NFS export must set seLinuxMount: false, because these two volumes are treated as a single volume by Linux kernel and must share the same -o context=<SELinux label> option.
    • See corresponding KEP for details.
    • Always test Pods with various SELinux contexts with various volume configurations before setting this field to true!

What creates the CSIDriver object?

To install, a CSI driver's deployment manifest must contain a CSIDriver object as shown in the example above.

NOTE: The cluster-driver-registrar sidecar, which was used to create CSIDriver objects in Kubernetes 1.13, has been deprecated as of Kubernetes 1.16. No cluster-driver-registrar has been released for Kubernetes 1.14 and later.

A CSIDriver instance should exist for the whole lifetime of all pods that use volumes provided by the corresponding CSI driver, so that the Skip Attach and Pod Info on Mount features work correctly.

Listing registered CSI drivers

Using the CSIDriver object, it is now possible to query Kubernetes to get a list of registered drivers running in the cluster as shown below:

$> kubectl get csidrivers.storage.k8s.io
NAME                      ATTACHREQUIRED   PODINFOONMOUNT   MODES                  AGE
mycsidriver.example.com   true             true             Persistent,Ephemeral   2m46s

Or get a more detailed view of your registered driver with:

$> kubectl describe csidrivers.storage.k8s.io
Name:         mycsidriver.example.com
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2022-04-07T05:58:06Z
  Managed Fields:
    API Version:  storage.k8s.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        f:attachRequired:
        f:fsGroupPolicy:
        f:podInfoOnMount:
        f:requiresRepublish:
        f:tokenRequests:
        f:volumeLifecycleModes:
          .:
          v:"Ephemeral":
          v:"Persistent":
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2022-04-07T05:58:06Z
  Resource Version:  896
  UID:               6cc7d513-6d72-4203-87d3-730f83884f89
Spec:
  Attach Required:    true
  Fs Group Policy:    File
  Pod Info On Mount:  true
  Volume Lifecycle Modes:
    Persistent
    Ephemeral
Events:  <none>

Changes from Alpha to Beta

CRD to Built-in Type

During alpha development, the CSIDriver object was also defined as a Custom Resource Definition (CRD). As part of the promotion to beta the object has been moved to the built-in Kubernetes API.

In the move from alpha to beta, the API Group for this object changed from csi.storage.k8s.io/v1alpha1 to storage.k8s.io/v1beta1.

There is no automatic update of existing CRDs and their CRs during a Kubernetes update to the new built-in type.

Enabling CSIDriver on Kubernetes

In Kubernetes v1.12 and v1.13, because the feature was alpha, it was disabled by default. To enable the use of CSIDriver on these versions, do the following:

  1. Ensure the feature gate is enabled via the following Kubernetes feature flag: --feature-gates=CSIDriverRegistry=true
  2. Either ensure the CSIDriver CRD is automatically installed via the Kubernetes Storage CRD addon OR manually install the CSIDriver CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/manifests/csidriver.yaml

Kubernetes v1.14+ uses the same Kubernetes feature flag, but because the feature is beta, it is enabled by default. And since the API type (as of beta) is built into the Kubernetes API, installation of the CRD is no longer required.

CSINode Object

Status

Status | Min K8s Version | Max K8s Version
Alpha  | 1.12            | 1.13
Beta   | 1.14            | 1.16
GA     | 1.17            | -

What is the CSINode object?

CSI drivers generate node specific information. Instead of storing this in the Kubernetes Node API Object, a new CSI specific Kubernetes CSINode object was created.

It serves the following purposes:

  1. Mapping Kubernetes node name to CSI node name
  • The CSI NodeGetInfo call returns the name by which the storage system refers to a node. Kubernetes must use this name in future ControllerPublishVolume calls. Therefore, when a new CSI driver is registered, Kubernetes stores the storage system node ID in the CSINode object for future reference.
  2. Driver availability
  • A way for kubelet to communicate to the kube-controller-manager and the Kubernetes scheduler whether the driver is available (registered) on the node or not.
  3. Volume topology
  • The CSI NodeGetInfo call returns a set of key/value labels identifying the topology of that node. Kubernetes uses this information to do topology-aware provisioning (see PVC Volume Binding Modes for more details). It stores the key/values as labels on the Kubernetes node object. In order to recall which Node label keys belong to a specific CSI driver, the kubelet stores the keys in the CSINode object for future reference.

What fields does the CSINode object have?

Here is an example of a v1 CSINode object:

apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: node1
spec:
  drivers:
  - name: mycsidriver.example.com
    nodeID: storageNodeID1
    topologyKeys: ['mycsidriver.example.com/regions', 'mycsidriver.example.com/zones']

What the fields mean:

  • drivers - list of CSI drivers running on the node and their properties.
  • name - the CSI driver that this object refers to.
  • nodeID - the assigned identifier for the node as determined by the driver.
  • topologyKeys - A list of topology keys assigned to the node as supported by the driver.

What creates the CSINode object?

CSI drivers do not need to create the CSINode object directly. Kubelet manages the object when a CSI driver registers through the kubelet plugin registration mechanism. The node-driver-registrar sidecar container helps with this registration.

Changes from Alpha to Beta

CRD to Built-in Type

The alpha object was called CSINodeInfo, whereas the beta object is called CSINode. The alpha CSINodeInfo object was also defined as a Custom Resource Definition (CRD). As part of the promotion to beta the object has been moved to the built-in Kubernetes API.

In the move from alpha to beta, the API Group for this object changed from csi.storage.k8s.io/v1alpha1 to storage.k8s.io/v1beta1.

There is no automatic update of existing CRDs and their CRs during a Kubernetes update to the new built-in type.

Enabling CSINodeInfo on Kubernetes

In Kubernetes v1.12 and v1.13, because the feature was alpha, it was disabled by default. To enable the use of CSINodeInfo on these versions, do the following:

  1. Ensure the feature gate is enabled with --feature-gates=CSINodeInfo=true
  2. Either ensure the CSINodeInfo CRD is automatically installed via the Kubernetes Storage CRD addon OR manually install the CSINodeInfo CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/manifests/csinodeinfo.yaml

Kubernetes v1.14+ uses the same Kubernetes feature flag, but because the feature is beta, it is enabled by default. And since the API type (as of beta) is built into the Kubernetes API, installation of the CRD is no longer required.

Features

The Kubernetes implementation of CSI has multiple sub-features. This section describes these sub-features, their status (although support for CSI in Kubernetes is GA/stable, support for sub-features moves independently, so sub-features may be alpha or beta), and how to integrate them into your CSI Driver.

Secrets and Credentials

Some drivers may require a secret in order to complete operations.

CSI Driver Secrets

If a CSI Driver requires secrets for a backend (a service account, for example), and this secret is required at the "per driver" granularity (not different "per CSI operation" or "per volume"), then the secret SHOULD be injected directly into CSI driver pods via standard Kubernetes secret distribution mechanisms during deployment.

CSI Operation Secrets

If a CSI Driver requires secrets "per CSI operation" or "per volume" or "per storage pool", the CSI spec allows secrets to be passed in for various CSI operations (including CreateVolumeRequest, ControllerPublishVolumeRequest, and more).

Cluster admins can populate such secrets by creating Kubernetes Secret objects and specifying the keys in the StorageClass or SnapshotClass objects.

The CSI sidecar containers facilitate the handling of secrets between Kubernetes and the CSI Driver. For more details, see the StorageClass Secrets and VolumeSnapshotClass Secrets sections below.

Secret RBAC Rules

To reduce RBAC permissions as much as possible, the secret rules are disabled in each sidecar repository by default.

Please add or update the RBAC rules if secrets are expected to be used.

To set the proper secret permissions, uncomment the related lines defined in rbac.yaml (e.g. external-provisioner/deploy/kubernetes/rbac.yaml).

Handling Sensitive Information

CSI Drivers that accept secrets SHOULD handle this data carefully. It may contain sensitive information and MUST be treated as such (e.g. not logged).

To make it easier to handle secret fields (e.g. strip them from CSI protos when logging), the CSI spec defines a decorator (csi_secret) on all fields containing sensitive information. Any fields decorated with csi_secret MUST be treated as if they contain sensitive information (e.g. not logged, etc.).

The Kubernetes CSI development team also provides a Go package called protosanitizer that CSI driver developers may use to remove the values of all fields in a gRPC message that are decorated with csi_secret. The library can be found in kubernetes-csi/csi-lib-utils/protosanitizer. The Kubernetes CSI Sidecar Containers and sample drivers use this library to ensure no sensitive information is logged.
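For example, a driver might log a request like this. A minimal sketch, assuming the standard CSI Go bindings; the helper function is illustrative:

package driver

import (
	"log"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
)

// logCreateVolume logs a CreateVolume request with every csi_secret-decorated
// field stripped. protosanitizer.StripSecrets returns a fmt.Stringer whose
// String() method replaces secret values, so raw secrets never reach the log.
func logCreateVolume(req *csi.CreateVolumeRequest) {
	log.Printf("received CreateVolume request: %s", protosanitizer.StripSecrets(req))
}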

StorageClass Secrets

The CSI external-provisioner sidecar container facilitates the handling of secrets for the following operations:

  • CreateVolumeRequest
  • DeleteVolumeRequest
  • ControllerPublishVolumeRequest
  • ControllerUnpublishVolumeRequest
  • ControllerExpandVolumeRequest
  • NodeStageVolumeRequest
  • NodePublishVolumeRequest

CSI external-provisioner v1.0.1+ supports the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/provisioner-secret-name
  • csi.storage.k8s.io/provisioner-secret-namespace
  • csi.storage.k8s.io/controller-publish-secret-name
  • csi.storage.k8s.io/controller-publish-secret-namespace
  • csi.storage.k8s.io/node-stage-secret-name
  • csi.storage.k8s.io/node-stage-secret-namespace
  • csi.storage.k8s.io/node-publish-secret-name
  • csi.storage.k8s.io/node-publish-secret-namespace

CSI external-provisioner v1.2.0+ adds support for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/controller-expand-secret-name
  • csi.storage.k8s.io/controller-expand-secret-namespace

Cluster admins can populate the secret fields for the operations listed above with data from Kubernetes Secret objects by specifying these keys in the StorageClass object.

Examples

Basic Provisioning Secret

In this example, the external-provisioner will fetch Kubernetes Secret object fast-storage-provision-key in the namespace pd-ssd-credentials and pass the credentials to the CSI driver named csi-driver.team.example.com in the CreateVolume CSI call.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
  type: pd-ssd
  csi.storage.k8s.io/provisioner-secret-name: fast-storage-provision-key
  csi.storage.k8s.io/provisioner-secret-namespace: pd-ssd-credentials

All volumes provisioned using this StorageClass use the same secret.

Per Volume Secrets

In this example, the external-provisioner will generate the name of the Kubernetes Secret object and namespace for the NodePublishVolume CSI call, based on the PVC namespace and annotations, at volume provision time.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
  type: pd-ssd
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.annotations['team.example.com/key']}
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}

This StorageClass will result in the creation of a PersistentVolume API object referencing a "node publish secret" in the same namespace as the PersistentVolumeClaim that triggered the provisioning and with a name specified as an annotation on the PersistentVolumeClaim. This could be used to give the creator of the PersistentVolumeClaim the ability to specify a secret containing a decryption key they have control over.
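For illustration, a PersistentVolumeClaim that satisfies the templates above might look like the following sketch (the names, namespace, and size are made up). Here the external-provisioner would resolve the node publish secret to the Secret my-vol-key in the namespace apps:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-vol
  namespace: apps
  annotations:
    team.example.com/key: my-vol-key
spec:
  storageClassName: fast-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi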

Multiple Operation Secrets

A driver may support secret keys for multiple operations. In this case, you can provide secret references for each operation:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage-all
provisioner: csi-driver.team.example.com
parameters:
  type: pd-ssd
  csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}
  csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}-fast-storage
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.name}-${pvc.annotations['team.example.com/key']}
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}-fast-storage
  

Operations

Details for each secret supported by the external-provisioner can be found below.

Create/Delete Volume Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters.

  • csi.storage.k8s.io/provisioner-secret-name
  • csi.storage.k8s.io/provisioner-secret-namespace

The values of both of these parameters, together, refer to the name and namespace of a Secret object in the Kubernetes API.

If specified, the CSI external-provisioner will attempt to fetch the secret before provisioning and deletion.

If the secret is retrieved successfully, the provisioner passes it to the CSI driver in the CreateVolumeRequest.secrets or DeleteVolumeRequest.secrets field.

If no such secret exists in the Kubernetes API, or the provisioner is unable to fetch it, the provision operation will fail.

Note, however, that the delete operation will continue even if the secret is not found (because, for example, the entire namespace containing the secret was deleted). In this case, if the driver requires a secret for deletion, then the volume and PV may need to be manually cleaned up.

The values of these parameters may be "templates". The external-provisioner will automatically resolve templates at volume provision time, as detailed below:

  • csi.storage.k8s.io/provisioner-secret-name
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.
      • Support added in CSI external-provisioner v1.2.0+
    • ${pvc.name}
      • Replaced with the name of the PersistentVolumeClaim object that triggered provisioning.
      • Support added in CSI external-provisioner v1.2.0+
  • csi.storage.k8s.io/provisioner-secret-namespace
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.

Controller Publish/Unpublish Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/controller-publish-secret-name
  • csi.storage.k8s.io/controller-publish-secret-namespace

The values of both of these parameters, together, refer to the name and namespace of a Secret object in the Kubernetes API.

If specified, the CSI external-provisioner sets the CSIPersistentVolumeSource.ControllerPublishSecretRef field in the new PersistentVolume object to refer to this secret once provisioning is successful.

The CSI external-attacher then attempts to fetch the secret referenced by the CSIPersistentVolumeSource.ControllerPublishSecretRef, if specified, before an attach or detach operation.

If no such secret exists in the Kubernetes API, or the external-attacher is unable to fetch it, the attach or detach operation fails.

If the secret is retrieved successfully, the external-attacher passes it to the CSI driver in the ControllerPublishVolumeRequest.secrets or ControllerUnpublishVolumeRequest.secrets field.

The values of these parameters may be "templates". The external-provisioner will automatically resolve templates at volume provision time, as detailed below:

  • csi.storage.k8s.io/controller-publish-secret-name
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.name}
      • Replaced with the name of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']})
      • Replaced with the value of the specified annotation from the PersistentVolumeClaim object that triggered provisioning
  • csi.storage.k8s.io/controller-publish-secret-namespace
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.

Node Stage Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/node-stage-secret-name
  • csi.storage.k8s.io/node-stage-secret-namespace

The values of both parameters, together, refer to the name and namespace of the Secret object in the Kubernetes API.

If specified, the CSI external-provisioner sets the CSIPersistentVolumeSource.NodeStageSecretRef field in the new PersistentVolume object to refer to this secret once provisioning is successful.

The Kubernetes kubelet then attempts to fetch the secret referenced by the CSIPersistentVolumeSource.NodeStageSecretRef field, if specified, before a mount device operation.

If no such secret exists in the Kubernetes API, or the kubelet is unable to fetch it, the mount device operation fails.

If the secret is retrieved successfully, the kubelet passes it to the CSI driver in the NodeStageVolumeRequest.secrets field.

The values of these parameters may be "templates". The external-provisioner will automatically resolve templates at volume provision time, as detailed below:

  • csi.storage.k8s.io/node-stage-secret-name
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.name}
      • Replaced with the name of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']})
      • Replaced with the value of the specified annotation from the PersistentVolumeClaim object that triggered provisioning
  • csi.storage.k8s.io/node-stage-secret-namespace
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.

Node Publish Secret

The CSI external-provisioner (v1.0.1+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/node-publish-secret-name
  • csi.storage.k8s.io/node-publish-secret-namespace

The values of both parameters, together, refer to the name and namespace of the Secret object in the Kubernetes API.

If specified, the CSI external-provisioner sets the CSIPersistentVolumeSource.NodePublishSecretRef field in the new PersistentVolume object to refer to this secret once provisioning is successful.

The Kubernetes kubelet attempts to fetch the secret referenced by the CSIPersistentVolumeSource.NodePublishSecretRef field, if specified, before a mount operation.

If no such secret exists in the Kubernetes API, or the kubelet is unable to fetch it, the mount operation fails.

If the secret is retrieved successfully, the kubelet passes it to the CSI driver in the NodePublishVolumeRequest.secrets field.

The values of these parameters may be "templates". The external-provisioner will automatically resolve templates at volume provision time, as detailed below:

  • csi.storage.k8s.io/node-publish-secret-name
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.name}
      • Replaced with the name of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']})
      • Replaced with the value of the specified annotation from the PersistentVolumeClaim object that triggered provisioning
  • csi.storage.k8s.io/node-publish-secret-namespace
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.

Controller Expand (Volume Resize) Secret

The CSI external-provisioner (v1.2.0+) looks for the following keys in StorageClass.parameters:

  • csi.storage.k8s.io/controller-expand-secret-name
  • csi.storage.k8s.io/controller-expand-secret-namespace

The values of both parameters, together, refer to the name and namespace of the Secret object in the Kubernetes API.

If specified, the CSI external-provisioner sets the CSIPersistentVolumeSource.ControllerExpandSecretRef field in the new PersistentVolume object to refer to this secret once provisioning is successful.

The external-resizer (v0.2.0+) attempts to fetch the secret referenced by the CSIPersistentVolumeSource.ControllerExpandSecretRef field, if specified, before starting a volume resize (expand) operation.

If no such secret exists in the Kubernetes API, or the external-resizer is unable to fetch it, the resize (expand) operation fails.

If the secret is retrieved successfully, the external-resizer passes it to the CSI driver in the ControllerExpandVolumeRequest.secrets field.

The values of these parameters may be "templates". The external-provisioner will automatically resolve templates at volume provision time, as detailed below:

  • csi.storage.k8s.io/controller-expand-secret-name
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.name}
      • Replaced with the name of the PersistentVolumeClaim object that triggered provisioning.
    • ${pvc.annotations['<ANNOTATION_KEY>']} (e.g. ${pvc.annotations['example.com/key']})
      • Replaced with the value of the specified annotation from the PersistentVolumeClaim object that triggered provisioning
  • csi.storage.k8s.io/controller-expand-secret-namespace
    • ${pv.name}
      • Replaced with name of the PersistentVolume object being provisioned.
    • ${pvc.namespace}
      • Replaced with namespace of the PersistentVolumeClaim object that triggered provisioning.

VolumeSnapshotClass Secrets

The CSI external-snapshotter sidecar container facilitates the handling of secrets for the following operations:

  • CreateSnapshotRequest
  • DeleteSnapshotRequest

CSI external-snapshotter v1.0.1+ supports the following keys in VolumeSnapshotClass.parameters:

  • csi.storage.k8s.io/snapshotter-secret-name
  • csi.storage.k8s.io/snapshotter-secret-namespace

Cluster admins can populate the secret fields for the operations listed above with data from Kubernetes Secret objects by specifying these keys in the VolumeSnapshotClass object.
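For example, a VolumeSnapshotClass referencing a snapshotter secret might look like the following sketch (the class name, driver name, and Secret coordinates are illustrative, and the apiVersion depends on the external-snapshotter release in use):

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: csi-driver.team.example.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: snapshot-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default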

Operations

Details for each secret supported by the external-snapshotter can be found below.

Create/Delete VolumeSnapshot Secret

CSI external-snapshotter v1.0.1+ looks for the following keys in VolumeSnapshotClass.parameters:

  • csi.storage.k8s.io/snapshotter-secret-name
  • csi.storage.k8s.io/snapshotter-secret-namespace

The values of both of these parameters, together, refer to the name and namespace of a Secret object in the Kubernetes API.

If specified, the CSI external-snapshotter will attempt to fetch the secret before creation and deletion.

If the secret is retrieved successfully, the snapshotter passes it to the CSI driver in the CreateSnapshotRequest.secrets or DeleteSnapshotRequest.secrets field.

If no such secret exists in the Kubernetes API, or the snapshotter is unable to fetch it, the create operation will fail.

Note, however, that the delete operation will continue even if the secret is not found (because, for example, the entire namespace containing the secret was deleted). In this case, if the driver requires a secret for deletion, then the volume and PV may need to be manually cleaned up.

The values of these parameters may be "templates". The external-snapshotter will automatically resolve templates at snapshot create time, as detailed below:

  • csi.storage.k8s.io/snapshotter-secret-name
    • ${volumesnapshotcontent.name}
      • Replaced with name of the VolumeSnapshotContent object being created.
    • ${volumesnapshot.namespace}
      • Replaced with namespace of the VolumeSnapshot object that triggered creation.
    • ${volumesnapshot.name}
      • Replaced with the name of the VolumeSnapshot object that triggered creation.
  • csi.storage.k8s.io/snapshotter-secret-namespace
    • ${volumesnapshotcontent.name}
      • Replaced with name of the VolumeSnapshotContent object being created.
    • ${volumesnapshot.namespace}
      • Replaced with namespace of the VolumeSnapshot object that triggered creation.

CSI Topology Feature

Status

Status | Min K8s Version | Max K8s Version | external-provisioner Version
Alpha  | 1.12            | 1.12            | 0.4
Alpha  | 1.13            | 1.13            | 1.0
Beta   | 1.14            | 1.16            | 1.1-1.4
GA     | 1.17            | -               | 1.5+

Overview

Some storage systems expose volumes that are not equally accessible by all nodes in a Kubernetes cluster. Instead, volumes may be constrained to some subset of node(s) in the cluster. The cluster may be segmented into, for example, “racks” or “regions” and “zones” or some other grouping, and a given volume may be accessible only from one of those groups.

To enable orchestration systems, like Kubernetes, to work well with storage systems which expose volumes that are not equally accessible by all nodes, the CSI spec enables:

  1. Ability for a CSI Driver to opaquely specify where a particular node exists (e.g. "node A" is in "zone 1").
  2. Ability for Kubernetes (users or components) to influence where a volume is provisioned (e.g. provision new volume in either "zone 1" or "zone 2").
  3. Ability for a CSI Driver to opaquely specify where a particular volume exists (e.g. "volume X" is accessible by all nodes in "zone 1" and "zone 2").

Kubernetes and the external-provisioner use these abilities to make intelligent scheduling and provisioning decisions, so that Kubernetes can both influence where a volume is provisioned and act on the topology information for each volume.

Implementing Topology in your CSI Driver

To support topology in a CSI driver, the following must be implemented:

  • The PluginCapability must support VOLUME_ACCESSIBILITY_CONSTRAINTS.
  • The plugin must fill in accessible_topology in NodeGetInfoResponse. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object (see the sketch after this list).
  • During CreateVolume, the topology information will get passed in through CreateVolumeRequest.accessibility_requirements.
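For illustration, a minimal sketch of a NodeGetInfo implementation that reports accessible topology, assuming the standard CSI Go bindings; the driver fields and topology key are made up:

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a hypothetical receiver; nodeID and zone would be discovered
// by the driver (e.g. from the storage system or cloud metadata).
type nodeServer struct {
	nodeID string
	zone   string
}

// NodeGetInfo reports the storage system node ID and the node's topology.
// Kubernetes records accessible_topology in the CSINode object and adds the
// segments as labels on the Node object.
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: ns.nodeID,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{
				"mycsidriver.example.com/zones": ns.zone,
			},
		},
	}, nil
}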

In the StorageClass object, both volumeBindingMode values of Immediate and WaitForFirstConsumer are supported.

  • If Immediate is set, then the external-provisioner will pass in all available topologies in the cluster for the driver.
  • If WaitForFirstConsumer is set, then the external-provisioner will wait for the scheduler to pick a node. The topology of that selected node will then be set as the first entry in CreateVolumeRequest.accessibility_requirements.preferred. All remaining topologies are still included in the requisite and preferred fields to support storage systems that span across multiple topologies.

Sidecar Deployment

The topology feature requires the external-provisioner sidecar with the Topology feature gate enabled:

--feature-gates=Topology=true

Kubernetes Cluster Setup

Beta

In the Kubernetes cluster the CSINodeInfo feature must be enabled on both Kubernetes master and nodes (refer to the CSINode Object section for more info):

--feature-gates=CSINodeInfo=true

In order to function properly, all Kubernetes masters and nodes must be on at least Kubernetes 1.14. If a selected node is on a lower version, topology is ignored and not passed to the driver during CreateVolume.

Alpha

The alpha feature in the external-provisioner is not compatible across Kubernetes versions. In addition, Kubernetes master and node version skew and upgrades are not supported.

The CSINodeInfo, VolumeScheduling, and KubeletPluginsWatcher feature gates must be enabled on both Kubernetes master and nodes.

The CSINodeInfo CRDs also have to be manually installed in the cluster.

Storage Internal Topology

Note that a storage system may also have an "internal topology" different from (independent of) the topology of the cluster where workloads are scheduled. Meaning volumes exposed by the storage system are equally accessible by all nodes in the Kubernetes cluster, but the storage system has some internal topology that may influence, for example, the performance of a volume from a given node.

CSI does not currently expose a first class mechanism to influence such storage system internal topology on provisioning. Therefore, Kubernetes can not programmatically influence such topology. However, a CSI Driver may expose the ability to specify internal storage topology during volume provisioning using an opaque parameter in the CreateVolume CSI call (CSI enables CSI Drivers to expose an arbitrary set of configuration options during dynamic provisioning by allowing opaque parameters to be passed from cluster admins to the storage plugins) -- this would enable cluster admins to be able to control the storage system internal topology during provisioning.

Raw Block Volume Feature

Status

Status | Min K8s Version | Max K8s Version | external-provisioner Version | external-attacher Version
Alpha  | 1.11            | 1.13            | 0.4                          | 0.4
Alpha  | 1.13            | 1.13            | 1.0                          | 1.0
Beta   | 1.14            | 1.17            | 1.1+                         | 1.1+
GA     | 1.18            | -               | 1.1+                         | 1.1+

Overview

This page documents how to implement raw block volume support in a CSI Driver.

A block volume is a volume that will appear as a block device inside the container. A mounted (file) volume is a volume that will be mounted using a specified file system and appear as a directory inside the container.

The CSI spec supports both block and mounted (file) volumes.

Implementing Raw Block Volume Support in Your CSI Driver

CSI doesn't provide a capability query for block volumes, so COs will simply pass through requests for block volume creation to CSI plugins, and plugins are allowed to fail with the InvalidArgument gRPC error code if they don't support block volumes. Kubernetes doesn't make any assumptions about which CSI plugins support block volumes and which don't, so users have to know whether any given storage class is capable of creating block volumes.

The difference between a request for a mounted (file) volume and a block volume is the VolumeCapabilities field of the request. Note that this field is an array and the created volume must support ALL of the capabilities requested, or else return an error. If the AccessType method of a VolumeCapability is VolumeCapability_Block, then the capability is requesting a raw block volume. Unlike mount volumes, block volumes don't have any specific capabilities that need to be validated, although access modes still apply.

Block volumes are much more likely to support multi-node flavors of VolumeCapability_AccessMode_Mode than mount volumes, because there's no file system state stored on the node side that creates any technical impediments to multi-attaching block volumes. While there may still be good reasons to prevent multi-attaching block volumes, and there may be implementations that are not capable of supporting multi-attach, you should think carefully about what makes sense for your driver.

CSI plugins that support both mount and block volumes must be sure to check the capabilities of all CSI RPC requests and ensure that the capability of the request matches the capability of the volume, to avoid trying to do file-system-related things to block volumes and block-related things to file system volumes. The following RPCs specify capabilities that must be validated:

  • CreateVolume() (multiple capabilities)
  • ControllerPublishVolume()
  • ValidateVolumeCapabilities() (multiple capabilities)
  • GetCapacity() (see below)
  • NodeStageVolume()
  • NodePublishVolume()

Also, CSI plugins that implement the optional GetCapacity() RPC should note that this RPC also includes capabilities, and if the capacity for mount volumes differs from the capacity for block volumes, that needs to be handled in the implementation of that RPC.

Q: Can CSI plugins support only block volumes and not mount volumes? A: Yes! This is just the reverse case of supporting mount volumes only. Plugins may return InvalidArgument for any creation request with an AccessType of VolumeCapability_Mount.
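A minimal sketch of such a capability check, assuming the standard CSI Go bindings; the helper is hypothetical and models a driver that supports only mounted (file) volumes:

package driver

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// validateCapabilities rejects requests this hypothetical driver cannot
// satisfy: any block access type results in InvalidArgument, mirroring the
// behavior described above for plugins without raw block support.
func validateCapabilities(caps []*csi.VolumeCapability) error {
	for _, c := range caps {
		if c.GetBlock() != nil {
			return status.Error(codes.InvalidArgument, "block volumes are not supported")
		}
		if c.GetMount() == nil {
			return status.Error(codes.InvalidArgument, "unknown access type")
		}
	}
	return nil
}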

Differences Between Block and Mount Volumes

The main difference between block volumes and mount volumes is the expected result of NodePublish(). For mount volumes, the CO expects the result to be a mounted directory at TargetPath. For block volumes, the CO expects there to be a device file at TargetPath. The device file can be a bind-mounted device from the host's /dev file system, or it can be a device node created at that location using mknod().

It's desirable but not required to expose an unfiltered device node. For example, CSI plugins based on technologies that implement SCSI protocols should expect that pods consuming the block volumes they create may want to send SCSI commands to the device. This is something that should "just work" by default (subject to container capabilities), so CSI plugins should avoid anything that would break this kind of use case. The only hard requirement, however, is that the device implements block reading/writing.

For plugins with the RPC_STAGE_UNSTAGE_VOLUME capability, the CO doesn't care exactly what is placed at the StagingTargetPath, but it's worth noting that some CSI RPCs are allowed to pass the plugin either a staging path or a publish path, so it's important to think carefully about how NodeStageVolume() is implemented, knowing that either path could get used by the CO to refer to the volume later on. This is made more challenging because the CSI spec says that StagingTargetPath is always a directory even for block volumes.

Sidecar Deployment

The raw block feature requires the external-provisioner and external-attacher sidecars to be deployed.

Kubernetes Cluster Setup

The BlockVolume and CSIBlockVolume feature gates need to be enabled on all Kubernetes masters and nodes.

--feature-gates=BlockVolume=true,CSIBlockVolume=true...

  • TODO: detail how Kubernetes API raw block fields get mapped to CSI methods/fields.

Skip Kubernetes Attach and Detach

Status

Status | Min K8s Version | Max K8s Version | cluster-driver-registrar Version
Alpha  | 1.12            | 1.12            | 0.4
Alpha  | 1.13            | 1.13            | 1.0
Beta   | 1.14            | 1.17            | n/a
GA     | 1.18            | -               | n/a

Overview

Volume drivers, like NFS, for example, have no concept of an attach (ControllerPublishVolume). However, Kubernetes always executes Attach and Detach operations even if the CSI driver does not implement an attach operation (i.e. even if the CSI Driver does not implement a ControllerPublishVolume call).

This was problematic because it meant all CSI drivers had to handle Kubernetes attachment. CSI Drivers that did not implement the PUBLISH_UNPUBLISH_VOLUME controller capability could work around this by deploying an external-attacher; the external-attacher would respond to Kubernetes attach operations and simply perform a no-op (because the CSI driver did not advertise the PUBLISH_UNPUBLISH_VOLUME controller capability).

Although the workaround works, it adds an unnecessary operation (round-trip) in the preparation of a volume for a container, and requires CSI Drivers to deploy an unnecessary sidecar container (external-attacher).

Skip Attach with CSI Driver Object

The CSIDriver Object enables CSI Drivers to specify how Kubernetes should interact with it.

Specifically, setting the attachRequired field to false instructs Kubernetes to skip any attach operation altogether.

For example, the existence of the following object would cause Kubernetes to skip attach operations for all volumes of the CSI Driver testcsidriver.example.com.

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  attachRequired: false

The CSIDriver object should be manually included in the driver deployment manifests.

Previously, the cluster-driver-registrar sidecar container could be deployed to create the object automatically. When its flags were configured correctly, it would create a CSIDriver object with the correct fields set at startup.

Alpha Functionality

In alpha, this feature was enabled via the CSIDriver Object CRD.

apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
....

Pod Info on Mount

Status

Status | Min K8s Version | Max K8s Version | cluster-driver-registrar Version
Alpha  | 1.12            | 1.12            | 0.4
Alpha  | 1.13            | 1.13            | 1.0
Beta   | 1.14            | 1.17            | n/a
GA     | 1.18            | -               | n/a

Overview

CSI avoids encoding Kubernetes specific information into the specification, since it aims to support multiple orchestration systems (beyond just Kubernetes).

This can be problematic because some CSI drivers require information about the workload (e.g. which pod is referencing this volume), and CSI does not provide this information natively to drivers.

Pod Info on Mount with CSI Driver Object

The CSIDriver Object enables CSI Drivers to specify how Kubernetes should interact with it.

Specifically, the podInfoOnMount field instructs Kubernetes that the CSI driver requires additional pod information (like podName, podUID, etc.) during mount operations.

For example, the existence of the following object would cause Kubernetes to add pod information at mount time to the NodePublishVolumeRequest.volume_context map.

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  podInfoOnMount: true

If the podInfoOnMount field is set to true, during mount, Kubelet will add the following key/values to the volume_context field in the CSI NodePublishVolumeRequest (a sketch of reading these keys follows the list):

  • csi.storage.k8s.io/pod.name: {pod.Name}
  • csi.storage.k8s.io/pod.namespace: {pod.Namespace}
  • csi.storage.k8s.io/pod.uid: {pod.UID}
  • csi.storage.k8s.io/serviceAccount.name: {pod.Spec.ServiceAccountName}
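For illustration, a minimal sketch of how a driver's NodePublishVolume implementation might read these keys, assuming the standard CSI Go bindings; the receiver type and mount logic are hypothetical:

package driver

import (
	"context"
	"log"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct{}

// NodePublishVolume reads the pod information that Kubelet injects into
// volume_context when podInfoOnMount is true. The keys are absent when the
// CSIDriver object does not request pod info.
func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	vc := req.GetVolumeContext()
	podName := vc["csi.storage.k8s.io/pod.name"]
	podNamespace := vc["csi.storage.k8s.io/pod.namespace"]
	log.Printf("publishing volume %s for pod %s/%s", req.GetVolumeId(), podNamespace, podName)
	// ... perform the actual mount at req.GetTargetPath() ...
	return &csi.NodePublishVolumeResponse{}, nil
}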

The CSIDriver object should be manually included in the driver manifests.

Previously, the cluster-driver-registrar sidecar container could be used to create the object. When its flags were configured correctly, it would automatically create a CSIDriver object with the correct fields set at startup.

Alpha Functionality

In alpha, this feature was enabled by setting the podInfoOnMountVersion field in the CSIDriver Object CRD to v1.

apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  podInfoOnMountVersion: v1

Volume Expansion

Status

Status | Min K8s Version | Max K8s Version | external-resizer Version
Alpha  | 1.14            | 1.15            | 0.2
Beta   | 1.16            | -               | 0.3

Overview

A storage provider that allows volume expansion after creation may choose to implement volume expansion either via a control-plane CSI RPC call, via a node CSI RPC call, or both as a two-step process.

Implementing Volume expansion functionality

To implement volume expansion the CSI driver MUST:

  1. Implement VolumeExpansion plugin capability.
  2. Implement EXPAND_VOLUME controller capability or implement EXPAND_VOLUME node capability or both.

The ControllerExpandVolume RPC call can be made when a volume is ONLINE or OFFLINE, depending on the VolumeExpansion plugin capability, where ONLINE and OFFLINE mean:

  1. ONLINE: The volume is currently published or available on a node.
  2. OFFLINE: The volume is currently not published or available on a node.

The NodeExpandVolume RPC call, on the other hand, always requires the volume to be published or staged on a node (and hence ONLINE). For block storage file systems, NodeExpandVolume is typically used to expand the file system on the node, but it can also be used to perform other volume-expansion-related housekeeping operations on the node.

For details, see the CSI spec.

Deploying volume expansion functionality

The Kubernetes CSI development team maintains the external-resizer Kubernetes CSI Sidecar Container. This sidecar container implements the logic for watching the Kubernetes API for PersistentVolumeClaim edits, issuing the ControllerExpandVolume RPC call against a CSI endpoint, and updating the PersistentVolume object to reflect the new size.

This sidecar is needed even if the CSI driver does not have the EXPAND_VOLUME controller capability; in this case, it performs a no-op expansion and updates the PersistentVolume object. NodeExpandVolume is always called by Kubelet on the node.

For more details, see external-resizer.

Enabling Volume expansion for CSI volumes in Kubernetes

To expand a volume if permitted by the storage class, users just need to edit the persistent volume claim object and request more storage.
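For example, assuming a StorageClass that permits expansion (the class and driver names are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: mycsidriver.example.com
allowVolumeExpansion: true

A user could then request more storage on an existing claim, for example:

kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'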

In Kubernetes 1.14 and 1.15, this feature was in alpha status and required enabling the following feature gate:

--feature-gates=ExpandCSIVolumes=true

Also in Kubernetes 1.14 and 1.15, online expansion had to be enabled explicitly:

--feature-gates=ExpandInUsePersistentVolumes=true

The external-resizer and kubelet add appropriate events and conditions to PersistentVolumeClaim objects indicating the progress of volume expansion operations.

Kubernetes PVC DataSource (CSI VolumeContentSource)

When creating a new PersistentVolumeClaim, the Kubernetes API provides a PersistentVolumeClaim.DataSource parameter. This parameter is used to specify the CSI CreateVolumeRequest.VolumeContentSource option for CSI Provisioners. The VolumeContentSource parameter instructs the CSI plugin to pre-populate the volume being provisioned with data from the specified source.

External Provisioner Responsibilities

If a DataSource is specified in the PersistentVolumeClaim, the CSI external-provisioner will fetch the specified resource and pass the appropriate object ID to the plugin in the CreateVolume call.

Supported DataSources

Currently there are two types of PersistentVolumeClaim.DataSource objects that are supported:

  1. VolumeSnapshot
  2. PersistentVolumeClaim (Cloning)

Volume Cloning

Status and Releases

Status | Min K8s Version | Max K8s Version | external-provisioner Version
Alpha  | 1.15            | 1.15            | 1.3
Beta   | 1.16            | 1.17            | 1.4
GA     | 1.18            | -               | 1.6

Overview

A Clone is defined as a duplicate of an existing Kubernetes Volume. For more information on cloning in Kubernetes see the concepts doc for Volume Cloning. A storage provider that allows volume cloning as a create feature may choose to implement volume cloning via a control-plane CSI RPC call.

For details regarding the kubernetes API for volume cloning, please see kubernetes concepts.

Implementing Volume cloning functionality

To implement volume cloning the CSI driver MUST:

  1. Implement checks for csi.CreateVolumeRequest.VolumeContentSource in the plugin's CreateVolume function implementation.
  2. Implement CLONE_VOLUME controller capability.

It is the responsibility of the storage plugin to either implement an expansion after clone if a provision request size is greater than the source, or allow the external-resizer to handle it. In the case that the plugin does not support resize capability and it does not have the capability to create a clone that is greater in size than the specified source volume, then the provision request should result in a failure.
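A minimal sketch of such a check inside CreateVolume, assuming the standard CSI Go bindings; the helper function is hypothetical:

package driver

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// resolveContentSource inspects CreateVolumeRequest.VolumeContentSource and
// returns the source volume ID for a clone, the source snapshot ID for a
// restore, or empty strings for a blank volume.
func resolveContentSource(req *csi.CreateVolumeRequest) (cloneFromVolume, restoreFromSnapshot string, err error) {
	src := req.GetVolumeContentSource()
	if src == nil {
		return "", "", nil // plain provisioning request
	}
	if vol := src.GetVolume(); vol != nil {
		return vol.GetVolumeId(), "", nil
	}
	if snap := src.GetSnapshot(); snap != nil {
		return "", snap.GetSnapshotId(), nil
	}
	return "", "", status.Error(codes.InvalidArgument, "unsupported volume content source")
}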

Deploying volume clone functionality

The Kubernetes CSI development team maintains the external-provisioner which is responsible for detecting requests for a PVC DataSource and providing that information to the plugin via the csi.CreateVolumeRequest. It's up to the plugin to check the csi.CreateVolumeRequest for a VolumeContentSource entry in the CreateVolumeRequest object.

There are no additional sidecars or add-on components required.

Enabling Cloning for CSI volumes in Kubernetes

Volume cloning was promoted to Beta in Kubernetes 1.16 and GA in 1.18, and as such is enabled by default for Kubernetes versions >= 1.16.

In Kubernetes 1.15 this feature was alpha status and required enabling the appropriate feature gate:

--feature-gates=VolumePVCDataSource=true

Example implementation

A trivial example implementation can be found in the csi-hostpath plugin in its implementation of CreateVolume.

Snapshot & Restore Feature

Status

Status | Min K8s Version | Max K8s Version | snapshot-controller Version | snapshot-validation-webhook Version | CSI external-snapshotter sidecar Version | external-provisioner Version
Alpha  | 1.12 | 1.12 | n/a  | n/a  | 0.4.0 <= version < 1.0 | 0.4.1 <= version < 1.0
Alpha  | 1.13 | 1.16 | n/a  | n/a  | 1.0.1 <= version < 2.0 | 1.0.1 <= version < 1.5
Beta   | 1.17 | -    | 2.0+ | 3.0+ | 2.0+                   | 1.5+

Overview

Many storage systems provide the ability to create a "snapshot" of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to provision a new volume (pre-populated with the snapshot data) or to restore the existing volume to a previous state (represented by the snapshot).

Kubernetes CSI currently enables CSI Drivers to expose the following functionality via the Kubernetes API:

  1. Creation and deletion of volume snapshots via Kubernetes native API.
  2. Creation of new volumes pre-populated with the data from a snapshot via Kubernetes dynamic volume provisioning.

Note: Documentation under https://kubernetes.io/docs is for the latest Kubernetes release. Documentation for earlier releases is stored in a different location. For example, this is the documentation location for v1.16.

Implementing Snapshot & Restore Functionality in Your CSI Driver

To implement the snapshot feature, a CSI driver MUST:

  • Implement the CREATE_DELETE_SNAPSHOT and, optionally, the LIST_SNAPSHOTS controller capabilities
  • Implement CreateSnapshot, DeleteSnapshot, and, optionally, the ListSnapshots, controller RPCs.

For details, see the CSI spec.

Sidecar Deployment

The Kubernetes CSI development team maintains the external-snapshotter Kubernetes CSI Sidecar Containers. This sidecar container implements the logic for watching the Kubernetes API objects and issuing the appropriate CSI snapshot calls against a CSI endpoint. For more details, see external-snapshotter documentation.

Snapshot Beta

Snapshot APIs

With the promotion of Volume Snapshot to beta, the feature is now enabled by default on standard Kubernetes deployments instead of being opt-in. This involves a revamp of volume snapshot APIs.

The schema definition for the custom resources (CRs) can be found here. The CRDs are no longer automatically deployed by the sidecar. They should be installed by the Kubernetes distributions.

Highlights in the snapshot v1beta1 APIs

  • DeletionPolicy is a required field in both VolumeSnapshotClass and VolumeSnapshotContent. This way the user has to explicitly specify it, leaving no room for confusion.
  • VolumeSnapshotSpec has a required Source field. Source may be either a PersistentVolumeClaimName (if dynamically provisioning a snapshot) or VolumeSnapshotContentName (if pre-provisioning a snapshot); see the example after this list.
  • VolumeSnapshotContentSpec has a required Source field. This Source may be either a VolumeHandle (if dynamically provisioning a snapshot) or a SnapshotHandle (if pre-provisioning volume snapshots).
  • VolumeSnapshot contains a Status to indicate the current state of the volume snapshot. It has a field BoundVolumeSnapshotContentName to indicate the VolumeSnapshot object is bound to a VolumeSnapshotContent.
  • VolumeSnapshotContent contains a Status to indicate the current state of the volume snapshot content. It has a field SnapshotHandle to indicate that the VolumeSnapshotContent represents a snapshot on the storage system.
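For example, a dynamically provisioned snapshot using the v1beta1 API might look like the following sketch (the object names are illustrative):

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: my-pvc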

Controller Split

  • The CSI external-snapshotter sidecar is split into two controllers, a snapshot controller and a CSI external-snapshotter sidecar.

The snapshot controller is deployed by the Kubernetes distributions and is responsible for watching VolumeSnapshot CRD objects and managing the creation and deletion lifecycle of snapshots.

The CSI external-snapshotter sidecar watches Kubernetes VolumeSnapshotContent CRD objects and triggers CreateSnapshot/DeleteSnapshot against a CSI endpoint.

Snapshot Validation Webhook

There is a new validating webhook server which provides tightened validation on snapshot objects. This SHOULD be installed by the Kubernetes distros along with the snapshot-controller, not by end users. It SHOULD be installed in all Kubernetes clusters that have the snapshot feature enabled. See Snapshot Validation Webhook for more details on how to use the webhook.

Kubernetes Cluster Setup

Volume snapshot was promoted to beta in Kubernetes 1.17, so the VolumeSnapshotDataSource feature gate is enabled by default.

See the Deployment section of Snapshot Controller on how to set up the snapshot controller and CRDs.

See the Deployment section of Snapshot Validation Webhook for more details on how to use the webhook.

Test Snapshot Feature

To test the Beta version of the snapshot feature, use the following example YAML files.

Create a StorageClass:

kubectl create -f storageclass.yaml

Create a PVC:

kubectl create -f pvc.yaml

Create a VolumeSnapshotClass:

kubectl create -f snapshotclass.yaml

Create a VolumeSnapshot:

kubectl create -f snapshot.yaml

Create a PVC from a VolumeSnapshot:

kubectl create -f restore.yaml

Snapshot Alpha

Snapshot APIs

Similar to the API for managing Kubernetes Persistent Volumes, the Kubernetes Volume Snapshots introduce three new API objects for managing snapshots: VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass. See Kubernetes Snapshot documentation for more details.

Unlike the core Kubernetes Persistent Volume objects, these Snapshot objects are defined as Custom Resource Definitions (CRDs). This is because the Kubernetes project is moving away from having resource types pre-defined in the API server. This allows the API server to be reused for projects other than Kubernetes, and consumers (like Kubernetes) simply install the resource types they require as CRDs. Because the Snapshot API types are not built in to Kubernetes, they must be installed prior to use.

The CRDs are automatically deployed by the CSI external-snapshotter sidecar. See Alpha section of the sidecar doc here.

The schema definition for the custom resources (CRs) can be found here.

In addition to these new CRD objects, a new, alpha DataSource field has been added to the PersistentVolumeClaim object. This new field enables dynamic provisioning of new volumes that are automatically pre-populated with data from an existing snapshot.

Kubernetes Cluster Setup

Since volume snapshot is an alpha feature in Kubernetes v1.12 to v1.16, you need to enable a new alpha feature gate called VolumeSnapshotDataSource in the Kubernetes master.

--feature-gates=VolumeSnapshotDataSource=true

Test Snapshot Feature

To test the Alpha version of the snapshot feature, use the following example YAML files.

Create a StorageClass:

kubectl create -f storageclass.yaml

Create a PVC:

kubectl create -f pvc.yaml

Create a VolumeSnapshotClass:

kubectl create -f snapshotclass.yaml

Create a VolumeSnapshot:

kubectl create -f snapshot.yaml

Create a PVC from a VolumeSnapshot:

kubectl create -f restore.yaml

PersistentVolumeClaim not Bound

If a PersistentVolumeClaim is not bound, the attempt to create a volume snapshot from that PersistentVolumeClaim will fail. No retries will be attempted. An event will be logged to indicate that the PersistentVolumeClaim is not bound.

Note that this could happen if the PersistentVolumeClaim spec and the VolumeSnapshot spec are in the same YAML file. In this case, when the VolumeSnapshot object is created, the PersistentVolumeClaim object is created but volume creation is not complete and therefore the PersistentVolumeClaim is not yet bound. You must wait until the PersistentVolumeClaim is bound and then create the snapshot.

Examples

See the Drivers page for a list of CSI drivers that implement the snapshot feature.

Pod Inline Volume Support

Status

CSI Ephemeral Inline Volumes

Status | Min K8s Version | Max K8s Version
Alpha | 1.15 | 1.15
Beta | 1.16 | 1.24
GA | 1.25 | -

Generic Ephemeral Inline Volumes

Status | Min K8s Version | Max K8s Version
Alpha | 1.19 | 1.20
Beta | 1.21 | 1.22
GA | 1.23 | -

Overview

Traditionally, volumes that are backed by CSI drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination. Two different Kubernetes features allow volumes to follow the Pod's lifecycle: CSI ephemeral volumes and generic ephemeral volumes.

In both features, the volumes are specified directly in the pod specification for ephemeral use cases. At runtime, these inline volumes follow the ephemeral lifecycle of their associated pods: Kubernetes and the driver handle all phases of the volume operations as pods are created and destroyed.

However, the two features are targeted at different use cases and thus have different APIs and different implementations.

See the CSI inline volumes and generic ephemeral volumes enhancement proposals for design details. The user facing documentation for both features is in the Kubernetes documentation.

Which feature should my driver support?

CSI ephemeral inline volumes are meant for simple, local volumes. All parameters that determine the content of the volume can be specified in the pod spec, and only there. Storage classes are not supported and all parameters are driver specific.

apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  containers:
    ...
  volumes:
    - name: vol
      csi:
        driver: inline.storage.kubernetes.io
        volumeAttributes:
          foo: bar

A CSI driver is suitable for CSI ephemeral inline volumes if:

  • it serves a special purpose and needs custom per-volume parameters, like drivers that provide secrets to a pod
  • it can create volumes when running on a node
  • fast volume creation is needed
  • resource usage on the node is small and/or does not need to be exposed to Kubernetes
  • rescheduling of pods onto a different node when storage capacity turns out to be insufficient is not needed
  • none of the usual volume features (restoring from snapshot, cloning volumes, etc.) are needed
  • ephemeral inline volumes have to be supported on Kubernetes clusters which do not support generic ephemeral volumes

A CSI driver is not suitable for CSI ephemeral inline volumes when:

  • provisioning is not local to the node
  • ephemeral volume creation requires volumeAttributes that should be restricted to an administrator, for example parameters that are otherwise set in a StorageClass or PV. Ephemeral inline volumes allow these attributes to be set directly in the Pod spec, and so are not restricted to an admin.

Generic ephemeral inline volumes make the normal volume API (storage classes, PersistentVolumeClaim) usable for ephemeral inline volumes.

kind: Pod
apiVersion: v1
metadata:
  name: some-pod
spec:
  containers:
     ...
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-frontend-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "scratch-storage-class"
            resources:
              requests:
                storage: 1Gi

A CSI driver is suitable for generic ephemeral inline volumes if it supports dynamic provisioning of volumes. No other changes are needed in the driver in that case. Such a driver can also support CSI ephemeral inline volumes if desired.

Security Considerations

CSI driver vendors that choose to support ephemeral inline volumes are responsible for secure handling of these volumes, and special consideration needs to be given to what volumeAttributes are supported by the driver. As noted above, a CSI driver is not suitable for CSI ephemeral inline volumes when volume creation requires volumeAttributes that should be restricted to an administrator. These attributes are set directly in the Pod spec, and therefore are not automatically restricted to an administrator when used as an inline volume.

CSI inline volumes are only intended to be used for ephemeral storage, and driver vendors should NOT allow usage of inline volumes for persistent storage unless they also provide a third party pod admission plugin to restrict usage of these volumes.

Cluster administrators who need to restrict the CSI drivers that are allowed to be used as inline volumes within a Pod spec may do so by:

  • Removing Ephemeral from volumeLifecycleModes in the CSIDriver spec, which prevents the driver from being used as an inline ephemeral volume.
  • Using an admission webhook to restrict how this driver is used.

Implementing CSI ephemeral inline support

Drivers must be modified (or implemented specifically) to support CSI inline ephemeral workflows. When Kubernetes encounters an inline CSI volume embedded in a pod spec, it treats that volume differently: the driver will only receive a NodePublishVolume call during the volume's mount phase, and a NodeUnpublishVolume call when the pod is going away and the volume is unmounted.

Because of this, ephemeral volumes are created not through the Controller Service, but through the Node Service. When the kubelet calls NodePublishVolume, it is the responsibility of the CSI driver to create the volume during that call and then publish it to the specified location. When the kubelet calls NodeUnpublishVolume, it is the responsibility of the CSI driver to delete the volume.

To support inline ephemeral volumes, a driver must implement the following services:

  • Identity service
  • Node service

CSI Extension Specification

NodePublishVolume

Arguments
  • volume_id: The volume ID is generated by Kubernetes and passed to the driver by the kubelet.
  • volume_context["csi.storage.k8s.io/ephemeral"]: This entry will be present and equal to "true".
Workflow

The driver will receive the appropriate arguments as defined above when an ephemeral volume is requested. The driver will create and publish the volume to the specified location as noted in the NodePublishVolume request. Volume size and any other parameters required will be passed in verbatim from the inline manifest parameters to the NodePublishVolumeRequest.volume_context.

There is no guarantee that NodePublishVolume will be called again after a failure, regardless of what the failure is. To avoid leaking resources, a CSI driver must either always free all resources before returning from NodePublishVolume on error or implement some kind of garbage collection.
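
The following Go sketch illustrates this flow, including the cleanup-on-error requirement just described. It is a minimal sketch, not taken from any real driver: the nodeServer type and the createLocalVolume, mountVolume, and deleteLocalVolume helpers are hypothetical stand-ins for a driver's own provisioning and mount logic.

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type nodeServer struct{ /* driver state */ }

// Hypothetical helpers standing in for real driver logic.
func createLocalVolume(id string, attrs map[string]string) (string, error) {
	return "/var/lib/example/" + id, nil
}
func mountVolume(path, target string) error { return nil }
func deleteLocalVolume(id string)           {}

func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	if req.GetVolumeContext()["csi.storage.k8s.io/ephemeral"] == "true" {
		// For an inline ephemeral volume the driver must create the volume
		// here, using only the attributes passed in from the pod spec.
		path, err := createLocalVolume(req.GetVolumeId(), req.GetVolumeContext())
		if err == nil {
			err = mountVolume(path, req.GetTargetPath())
		}
		if err != nil {
			// NodePublishVolume may never be retried after a failure, so
			// free everything before returning to avoid leaking resources.
			deleteLocalVolume(req.GetVolumeId())
			return nil, status.Errorf(codes.Internal, "ephemeral volume %q: %v", req.GetVolumeId(), err)
		}
		return &csi.NodePublishVolumeResponse{}, nil
	}
	// ... regular publish path for persistent volumes ...
	return &csi.NodePublishVolumeResponse{}, nil
}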

NodeUnpublishVolume

Arguments

No changes

Workflow

The driver is responsible for deleting the ephemeral volume once it has unpublished it. It MAY delete the volume before completing the request, or after the unpublish response has been returned.

Read-Only Volumes

It is possible for a CSI driver to provide volumes to Pods as read-only while allowing them to be writeable on the node for kubelet, the driver, and the container runtime. This allows the CSI driver to dynamically update contents of the volume without exposing issues like CVE-2017-1002102, since the volume is read-only for the end user. It also allows the fsGroup and SELinux context of files to be applied on the node so the Pod gets the volume with the expected permissions and SELinux label.

To benefit from this behavior, the following can be implemented in the CSI driver:

  • The driver provides an admission plugin that sets ReadOnly: true on all volumeMounts of such volumes. We can't trust that this will be done by every user on every pod.
  • The driver checks that the readonly flag is set in all NodePublish requests. We can't trust that the admission plugin above is deployed on every cluster.
  • When both conditions above are satisfied, the driver MAY ignore the readonly flag in NodePublish and set up the volume as read-write. Ignoring the readonly flag in NodePublish is considered valid CSI driver behavior for inline ephemeral volumes.

The presence of ReadOnly: true in the Pod spec tells kubelet to bind-mount the volume to the container as read-only, while the underlying mount is read-write on the host. This is the same behavior used for projected volumes like Secrets and ConfigMaps.

CSIDriver

Kubernetes only allows using a CSI driver for an inline volume if its CSIDriver object explicitly declares that the driver supports that kind of usage in its volumeLifecycleModes field. This is a safeguard against accidentally using a driver the wrong way.
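
For example, a driver that supports both kinds of usage might declare the following in its CSIDriver object (the driver name is illustrative):

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: inline.storage.kubernetes.io
spec:
  podInfoOnMount: true
  volumeLifecycleModes:
    - Ephemeral    # allow use as an inline volume in pod specs
    - Persistent   # keep supporting PV/PVC-backed volumes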

References

Volume Limits

Status

Status | Min K8s Version | Max K8s Version
Alpha | 1.11 | 1.11
Beta | 1.12 | 1.16
GA | 1.17 | -

Overview

Some storage providers have a restriction on the number of volumes that can be used on a node. This is common with cloud providers, but other providers may impose restrictions as well.

Kubernetes will respect this limit as long as the CSI driver advertises it. To support volume limits in a CSI driver, the plugin must fill in max_volumes_per_node in NodeGetInfoResponse, as in the sketch below.

It is recommended that CSI drivers allow volume limits to be customized. That way, cluster administrators can distribute the limits of the same storage backend (e.g. iSCSI) across different drivers, according to their individual needs.
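
A minimal sketch of advertising such a limit, assuming the value is read from a hypothetical --max-volumes-per-node driver flag:

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct {
	nodeID            string
	maxVolumesPerNode int64 // e.g. populated from a --max-volumes-per-node flag
}

func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: ns.nodeID,
		// Kubernetes will schedule no more than this many of the driver's
		// volumes onto the node; 0 means "no limit".
		MaxVolumesPerNode: ns.maxVolumesPerNode,
	}, nil
}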

Storage Capacity Tracking

Status

Status | Min K8s Version | Max K8s Version
Alpha | 1.19 | -

Overview

Storage capacity tracking allows the Kubernetes scheduler to make more informed choices about where to start pods which depend on unbound volumes with late binding (aka "wait for first consumer"). Without storage capacity tracking, a node is chosen without knowing whether those volumes can be made available for the node. Volume creation is attempted and if that fails, the pod has to be rescheduled, potentially landing on the same node again. With storage capacity tracking, the scheduler filters out nodes which do not have enough capacity.

For design information, see the enhancement proposal.

Usage

To support rescheduling of a pod, a CSI driver deployment must:

  • return the ResourceExhausted gRPC status code in CreateVolume if capacity is exhausted
  • use external-provisioner >= 1.6.0 because older releases did not properly support rescheduling after a ResourceExhausted error

To support storage capacity tracking, a CSI driver deployment must:

  • implement the CSI GetCapacity call in its controller service (a sketch follows below)
  • enable the capacity feature of the external-provisioner, which publishes the reported capacity to the cluster as CSIStorageCapacity objects

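A minimal sketch of the controller-side GetCapacity call, where queryBackendFreeBytes is a hypothetical stand-in for a driver-specific capacity query:

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type controllerServer struct{}

// queryBackendFreeBytes is a hypothetical helper that asks the storage
// backend how many bytes are still available for the given topology.
func queryBackendFreeBytes(topo *csi.Topology) (int64, error) {
	return 100 << 30, nil // placeholder: 100 GiB
}

func (cs *controllerServer) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
	avail, err := queryBackendFreeBytes(req.GetAccessibleTopology())
	if err != nil {
		return nil, err
	}
	return &csi.GetCapacityResponse{AvailableCapacity: avail}, nil
}
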
Further information can be found in the Kubernetes documentation.

Volume Health Monitoring Feature

Status

Status | Min K8s Version | Max K8s Version | external-health-monitor-controller Version
Alpha | 1.21 | - | 0.8.0

Overview

The External Health Monitor is part of the Kubernetes implementation of the Container Storage Interface (CSI). It was introduced as an Alpha feature in Kubernetes v1.19. In Kubernetes 1.21, a second Alpha was done due to a design change which deprecated the External Health Monitor Agent.

The External Health Monitor is implemented as two components: External Health Monitor Controller and Kubelet.

  • External Health Monitor Controller:

    • The external health monitor controller is deployed as a sidecar together with the CSI controller driver, similar to how the external-provisioner sidecar is deployed.
    • It triggers controller RPCs to check the health condition of the CSI volumes.
    • The external controller sidecar also watches for node failure events; this component can be enabled via a flag.
  • Kubelet:

    • In addition to the existing volume stats it already collects, the kubelet will also check the volume's mounting conditions, collected from the same CSI node RPC, and log events to Pods if the volume condition is abnormal.

The Volume Health Monitoring feature needs to invoke the following CSI interfaces; a sketch of the kubelet-side call follows the list.

  • External Health Monitor Controller:
    • ListVolumes (If both ListVolumes and ControllerGetVolume are supported, ListVolumes will be used)
    • ControllerGetVolume
  • Kubelet:
    • NodeGetVolumeStats
    • This feature in Kubelet is controlled by an Alpha feature gate CSIVolumeHealth.
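
A minimal sketch of the kubelet-facing call, where checkMount is a hypothetical health probe; VolumeCondition is the CSI field the kubelet reads to decide whether to log an abnormal-volume event:

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct{}

// checkMount is a hypothetical probe that inspects the mount at volumePath.
func checkMount(volumePath string) (healthy bool, message string) {
	return true, ""
}

func (ns *nodeServer) NodeGetVolumeStats(ctx context.Context, req *csi.NodeGetVolumeStatsRequest) (*csi.NodeGetVolumeStatsResponse, error) {
	healthy, msg := checkMount(req.GetVolumePath())
	return &csi.NodeGetVolumeStatsResponse{
		// Usage would carry the usual capacity/inode statistics.
		VolumeCondition: &csi.VolumeCondition{
			Abnormal: !healthy,
			Message:  msg,
		},
	}, nil
}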

See external-health-monitor-controller.md for more details on the CSI external-health-monitor-controller sidecar.

Token Requests

Status

Status | Min K8s Version | Max K8s Version
Alpha | 1.20 | 1.20
Beta | 1.21 | 1.21
GA | 1.22 | -

Overview

This feature allows CSI drivers to impersonate the pods that they mount the volumes for. This improves the security posture in the mounting process where the volumes are ACL’ed on the pods’ service account without handing out unnecessary permissions to the CSI drivers’ service account. This feature is especially important for secret-handling CSI drivers, such as the secrets-store-csi-driver. Since these tokens can be rotated and short-lived, this feature also provides a knob for CSI drivers to receive NodePublishVolume RPC calls periodically with the new token. This knob is also useful when volumes are short-lived, e.g. certificates.

See more details at the design document.

Usage

This feature adds two fields in CSIDriver spec:

type CSIDriverSpec struct {
    ... // existing fields

    RequiresRepublish *bool
    TokenRequests []TokenRequest
}

type TokenRequest struct {
    Audience string
    ExpirationSeconds *int64
}
  • TokenRequest.Audience:

    • This is a required field.
    • Audiences should be distinct, otherwise the validation will fail.
    • If it is an empty string, the audience of the token is the APIAudiences of kube-apiserver.
    • See more about audience specification here
  • TokenRequest.ExpirationSeconds:

    • The field is optional.
    • It has to be at least 10 minutes (600 seconds) and no more than 1 << 32 seconds.
  • RequiresRepublish:

    • This field is optional.
    • If this is true, NodePublishVolume will be called periodically. When used with TokenRequest, the token will be refreshed if it has expired. NodePublishVolume should only change the contents of the volume rather than the mount itself, because the container will not be restarted to reflect a mount change. The period between NodePublishVolume calls is 0.1s.

The token will be bound to the pod that the CSI driver is mounting volumes for and will be set in VolumeContext:

"csi.storage.k8s.io/serviceAccount.tokens": {
  <audience>: {
    'token': <token>,
    'expirationTimestamp': <expiration timestamp in RFC3339 format>,
  },
  ...
}

If the CSI driver doesn't find a token recorded in the volume_context, it should return an error in NodePublishVolume to inform the kubelet to retry, as in the sketch below.
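
A sketch of that lookup, assuming the JSON layout shown above; podTokenForAudience is an illustrative helper, not part of any sidecar library:

package driver

import (
	"encoding/json"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

const tokensKey = "csi.storage.k8s.io/serviceAccount.tokens"

type tokenEntry struct {
	Token               string `json:"token"`
	ExpirationTimestamp string `json:"expirationTimestamp"`
}

// podTokenForAudience extracts the pod's token for one audience from the
// volume context passed to NodePublishVolume.
func podTokenForAudience(volumeContext map[string]string, audience string) (string, error) {
	raw, ok := volumeContext[tokensKey]
	if !ok {
		// Returning an error makes the kubelet retry NodePublishVolume.
		return "", status.Error(codes.InvalidArgument, "service account token not found in volume context")
	}
	tokens := map[string]tokenEntry{}
	if err := json.Unmarshal([]byte(raw), &tokens); err != nil {
		return "", status.Errorf(codes.InvalidArgument, "parsing %s: %v", tokensKey, err)
	}
	entry, ok := tokens[audience]
	if !ok {
		return "", status.Errorf(codes.InvalidArgument, "no token for audience %q", audience)
	}
	return entry.Token, nil
}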

Example

Here is an example of a CSIDriver object:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: mycsidriver.example.com
spec:
  tokenRequests:
    - audience: "gcp"
    - audience: ""
      expirationSeconds: 3600
  requiresRepublish: true

Feature gate

Kube apiserver must start with the CSIServiceAccountToken feature gate enabled:

--feature-gates=CSIServiceAccountToken=true

It is enabled by default in Kubernetes 1.21 and cannot be disabled since 1.22.

Example CSI Drivers

  • secrets-store-csi-driver
    • With GCP, the driver will pass the token to GCP provider to exchange for GCP credentials, and then request secrets from Secret Manager.
    • With Vault, the Vault provider will send the token to Vault which will use the token in TokenReview request to authenticate.
    • With Azure, the driver will pass the token to Azure provider to exchange for Azure credentials, and then request secrets from Key Vault.

CSI Driver fsGroup Support

There are two features related to supporting fsGroup for the CSI driver: CSI volume fsGroup policy and delegating fsGroup to CSI driver. For more information about using fsGroup in Kubernetes, please refer to the Kubernetes documentation on Pod security context.

CSI Volume fsGroup Policy

Status

Status | Min K8s Version | Max K8s Version
Alpha | 1.19 | 1.19
Beta | 1.20 | 1.22
GA | 1.23 | -

Overview

CSI Drivers can indicate whether or not they support modifying a volume's ownership or permissions when the volume is being mounted. This can be useful if the CSI Driver does not support the operation, or wishes to re-use volumes with constantly changing permissions.

See the design document for further information.

Example Usage

When creating the CSIDriver object, fsGroupPolicy is defined in the driver's spec. The following shows the hostpath driver with fsGroupPolicy: None, indicating that volumes should not be modified when mounted:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.k8s.io
spec:
  # Supports persistent and ephemeral inline volumes.
  volumeLifecycleModes:
  - Persistent
  - Ephemeral
  # To determine at runtime which mode a volume uses, pod info and its
  # "csi.storage.k8s.io/ephemeral" entry are needed.
  podInfoOnMount: true
  fsGroupPolicy: None

Supported Modes

The following modes are supported:

  • None: Indicates that volumes will be mounted with no modifications, as the CSI volume driver does not support these operations.
  • File: Indicates that the CSI volume driver supports volume ownership and permission change via fsGroup, and Kubernetes may use fsGroup to change the permissions and ownership of the volume to match the user-requested fsGroup in the pod's SecurityPolicy, regardless of fstype or access mode.
  • ReadWriteOnceWithFSType: Indicates that volumes will be examined to determine if volume ownership and permissions should be modified to match the pod's security policy. Changes will only occur if the fsType is defined and the persistent volume's accessModes contains ReadWriteOnce.

If undefined, fsGroupPolicy will default to ReadWriteOnceWithFSType, keeping the previous behavior.

Feature Gates

To use this field, Kubernetes 1.19 binaries must start with the CSIVolumeFSGroupPolicy feature gate enabled:

--feature-gates=CSIVolumeFSGroupPolicy=true

This is enabled by default on 1.20 and higher.

Delegate fsGroup to CSI Driver

Status

Status | Min K8s Version | Max K8s Version
Alpha | 1.22 | 1.22
Beta | 1.23 | -
GA | 1.26 | -

Overview

For most drivers, kubelet applies the fsGroup specified in a Pod spec by recursively changing volume ownership during the mount process. This does not work for certain drivers. For example:

  • A driver requires passing fsGroup to mount options in order for it to take effect.
  • A driver needs to apply fsGroup at the stage step (NodeStageVolume in CSI; MountDevice in Kubernetes) instead of the mount step (NodePublishVolume in CSI; SetUp/SetUpAt in Kubernetes).

This feature provides a mechanism for the driver to apply fsGroup instead of kubelet. Specifically, it passes fsGroup to the CSI driver through NodeStageVolume and NodePublishVolume calls, and the kubelet fsGroup logic is disabled. The driver is expected to apply the fsGroup within one of these calls.

If this feature is enabled in Kubernetes and a volume uses a driver that supports this feature, CSIDriver.spec.fsGroupPolicy and Pod.spec.securityContext.fsGroupChangePolicy are ignored.

See the design document and the description of the VolumeCapability.MountVolume.volume_mount_group field in the CSI spec for further information.

Usage

The CSI driver must implement the VOLUME_MOUNT_GROUP node service capability. The pod-specified fsGroup will be available in NodeStageVolumeRequest and NodePublishVolumeRequest via VolumeCapability.MountVolume.VolumeMountGroup, as in the sketch below.
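
A minimal sketch of both halves; how the group is then applied (mount option, chown at stage time, etc.) is driver specific and not shown:

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct{}

// Advertise VOLUME_MOUNT_GROUP so the kubelet delegates fsGroup handling.
func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{{
			Type: &csi.NodeServiceCapability_Rpc{
				Rpc: &csi.NodeServiceCapability_RPC{
					Type: csi.NodeServiceCapability_RPC_VOLUME_MOUNT_GROUP,
				},
			},
		}},
	}, nil
}

// mountGroup returns the Pod's fsGroup (as a string) from a stage request.
func mountGroup(req *csi.NodeStageVolumeRequest) string {
	if m := req.GetVolumeCapability().GetMount(); m != nil {
		return m.GetVolumeMountGroup()
	}
	return ""
}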

Feature Gates

To use this field, Kubernetes 1.22 binaries must start with the DelegateFSGroupToCSIDriver feature gate enabled:

--feature-gates=DelegateFSGroupToCSIDriver=true

This is enabled by default on 1.23 and higher.

CSI Windows Support

Status

Status | Min K8s Version | Min CSI proxy Version | Min Node Driver Registrar Version
Alpha | 1.18 | 0.1.0 | 1.3.0
Beta | 1.19 | 0.2.0 | 1.3.0
GA | 1.19 | 1.0.0 | 1.3.0

Overview

CSI drivers (e.g. AzureDisk, GCE PD, etc.) are recommended to be deployed as containers. A CSI driver's node plugin typically runs on every worker node in the cluster (as a DaemonSet). Node plugin containers need to run with elevated privileges to perform storage-related operations, but historically Windows did not support privileged containers (note: privileged containers, a.k.a. HostProcess containers, were only recently introduced as an alpha feature in Kubernetes 1.22). To solve this problem, CSI Proxy is a binary that runs on the Windows host and executes a set of privileged storage operations on Windows nodes on behalf of containers in a CSI node plugin DaemonSet. This enables multiple CSI node plugins to execute privileged storage operations on Windows nodes without having to ship a custom privileged-operation proxy.

Please note that CSI controller level operations/sidecars are not supported on Windows.

How to use the CSI Proxy for Windows?

See how to install CSI Proxy in the Deployment chapter.

For CSI driver authors, import the CSI proxy client under github.com/kubernetes-csi/csi-proxy/client. There are six client API groups: disk, filesystem, iscsi, smb, system, and volume. See link for details. As an example, please check how the GCE PD driver imports the disk, volume, and filesystem client API groups here.

The Daemonset specification of a CSI node plugin for Windows can mount the desired named pipes from CSI Proxy based on the version of the API groups that the node-plugin needs to execute.

The following Daemonset YAML shows how to mount various API groups from CSI Proxy into a CSI Node plugin:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-storage-node-win
spec:
  selector:
    matchLabels:
      app: csi-driver-win
  template:
    metadata:
      labels:
        app: csi-driver-win
    spec:
      serviceAccountName: csi-node-sa
      nodeSelector:
        kubernetes.io/os: windows
      containers:
        - name: csi-driver-registrar
          image: registry.k8s.io/sig-storage/csi-node-driver-registrar
          args:
            - "--v=5"
            - "--csi-address=unix://C:\\csi\\csi.sock"
            - "--kubelet-registration-path=C:\\kubelet\\plugins\\plugin.csi\\csi.sock"
          volumeMounts:
            - name: plugin-dir
              mountPath: C:\csi
            - name: registration-dir
              mountPath: C:\registration
        - name: csi-driver
          image: registry.k8s.io/sig-storage/csi-driver:win-v1
          args:
            - "--v=5"
            - "--endpoint=unix:/csi/csi.sock"
          volumeMounts:
            - name: kubelet-dir
              mountPath: C:\var\lib\kubelet
            - name: plugin-dir
              mountPath: C:\csi
            - name: csi-proxy-disk-pipe
              mountPath: \\.\pipe\csi-proxy-disk-v1
            - name: csi-proxy-volume-pipe
              mountPath: \\.\pipe\csi-proxy-volume-v1
            - name: csi-proxy-filesystem-pipe
              mountPath: \\.\pipe\csi-proxy-filesystem-v1
      volumes:
        - name: csi-proxy-disk-pipe
          hostPath:
            path: \\.\pipe\csi-proxy-disk-v1
            type: ""
        - name: csi-proxy-volume-pipe
          hostPath:
            path: \\.\pipe\csi-proxy-volume-v1
            type: ""
        - name: csi-proxy-filesystem-pipe
          hostPath:
            path: \\.\pipe\csi-proxy-filesystem-v1
            type: ""
        - name: registration-dir
          hostPath:
            path: C:\var\lib\kubelet\plugins_registry\
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: C:\var\lib\kubelet\
            type: Directory
        - name: plugin-dir
          hostPath:
            path: C:\var\lib\kubelet\plugins\csi.org.io\
            type: DirectoryOrCreate

Prevent unauthorized volume mode conversion

Status

Status | Min K8s Version | Max K8s Version | external-snapshotter Version | external-provisioner Version
Alpha | 1.24 | - | 6.0.1+ | 3.2.1+

Overview

Malicious users can populate the spec.volumeMode field of a PersistentVolumeClaim with a Volume Mode that differs from the original volume's mode to potentially exploit an as-yet-unknown vulnerability in the host operating system. This feature allows cluster administrators to prevent unauthorized users from converting the mode of a volume when a PersistentVolumeClaim is being created from an existing VolumeSnapshot instance.

See the Kubernetes Enhancement Proposal for more details on the background, design and discussions.

Usage

To enable this feature, cluster administrators must:

  • Create VolumeSnapshot APIs with a minimum version of v6.0.1.
  • Use snapshot-controller and snapshot-validation-webhook with a minimum version of v6.0.1.
  • Use external-provisioner with a minimum version of v3.2.1.
  • Set the --prevent-volume-mode-conversion=true flag in the snapshot-controller, snapshot-validation-webhook, and external-provisioner (see the excerpt below).
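
With the flag enabled, a volume mode conversion is honored only when an administrator has explicitly annotated the snapshot's VolumeSnapshotContent. The excerpt below is illustrative (the object name is an example and the other required fields are omitted):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: snapcontent-example
  annotations:
    # Added by a cluster administrator to permit the volume mode of the
    # restored PVC to differ from the source volume's mode.
    snapshot.storage.kubernetes.io/allow-volume-mode-change: "true"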

For more information about how to use the feature, visit the Kubernetes blog page.

Cross-namespace storage data sources

Status

Status | Min K8s Version | Max K8s Version | external-provisioner Version
Alpha | 1.26 | - | 3.4.0+

Overview

By default, a VolumeSnapshot is a namespace-scoped resource while a VolumeSnapshotContent is a cluster-scoped resource. Consequently, you cannot restore a snapshot from a different namespace than the source.

With this feature enabled, you can specify a namespace attribute in the dataSourceRef. Once Kubernetes checks that access is allowed, the new PersistentVolume can populate its data from the storage source specified in another namespace.

See the Kubernetes Enhancement Proposal for more details on the background, design and discussions.

Usage

To enable this feature, cluster administrators must:

  • Install a CRD for ReferenceGrants, supplied by the Gateway API project
  • Enable the AnyVolumeDataSource and CrossNamespaceVolumeDataSource feature gates for the kube-apiserver and kube-controller-manager
  • Install a CRD for the specific VolumeSnapshot controller
  • Start the CSI Provisioner controller with the argument --feature-gates=CrossNamespaceVolumeDataSource=true
  • Grant the CSI Provisioner get, list, and watch permissions for referencegrants (API group gateway.networking.k8s.io)
  • Install the CSI driver (an illustrative manifest pair follows this list)
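
To illustrate, the manifests below sketch a ReferenceGrant that lets PersistentVolumeClaims in namespace dev restore from a VolumeSnapshot in namespace prod. All names and namespaces are illustrative:

# Created in the source namespace (prod) by its owner.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-prod-snapshots
  namespace: prod
spec:
  from:
    - group: ""
      kind: PersistentVolumeClaim
      namespace: dev
  to:
    - group: snapshot.storage.k8s.io
      kind: VolumeSnapshot
---
# Created in the consuming namespace (dev); note the namespace attribute
# in dataSourceRef, which this feature enables.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
  namespace: dev
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: prod-snapshot
    namespace: prod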

For more information about how to use the feature, visit the Kubernetes blog page.

Deploying CSI Driver on Kubernetes

This page describes how CSI driver developers can deploy their driver onto a Kubernetes cluster.

Overview

A CSI driver is typically deployed in Kubernetes as two components: a controller component and a per-node component.

Controller Plugin

The controller component can be deployed as a Deployment or StatefulSet on any node in the cluster. It consists of the CSI driver that implements the CSI Controller service and one or more sidecar containers. These controller sidecar containers typically interact with Kubernetes objects and make calls to the driver's CSI Controller service.

It generally does not need direct access to the host and can perform all of its operations through the Kubernetes API and external control plane services. Multiple copies of the controller component can be deployed for high availability; however, it is recommended to use leader election to ensure there is only one active controller at a time.

Controller sidecars include the external-provisioner, external-attacher, external-snapshotter, and external-resizer. Each sidecar is optional depending on which features the driver supports; see each sidecar's page for more details.

Communication with Sidecars


Sidecar containers manage Kubernetes events and make the appropriate calls to the CSI driver. The calls are made over a UNIX domain socket shared between the sidecars and the CSI driver through an emptyDir volume, as in the excerpt below.
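
The excerpt below sketches this wiring for a controller pod; the driver image name, sidecar tag, and socket path are illustrative:

spec:
  containers:
    - name: my-csi-driver
      image: example.com/my-csi-driver:v1.0.0
      args:
        - "--endpoint=unix:///csi/csi.sock"
      volumeMounts:
        - name: socket-dir
          mountPath: /csi
    - name: csi-provisioner
      image: registry.k8s.io/sig-storage/csi-provisioner:v3.0.0
      args:
        - "--csi-address=/csi/csi.sock"
      volumeMounts:
        - name: socket-dir
          mountPath: /csi
  volumes:
    # Shared by the driver and its sidecars; the socket never leaves the pod.
    - name: socket-dir
      emptyDir: {}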

RBAC Rules

Most controller sidecars interact with Kubernetes objects and therefore need to set RBAC policies. Each sidecar repository contains example RBAC configurations.

Node Plugin

The node component should be deployed on every node in the cluster through a DaemonSet. It consists of the CSI driver that implements the CSI Node service and the node-driver-registrar sidecar container.

Communication with Kubelet


The Kubernetes kubelet runs on every node and is responsible for making the CSI Node service calls. These calls mount and unmount the storage volume from the storage system, making it available to the Pod to consume. Kubelet makes calls to the CSI driver through a UNIX domain socket shared on the host via a HostPath volume. There is also a second UNIX domain socket that the node-driver-registrar uses to register the CSI driver to kubelet.

Driver Volume Mounts

The node plugin needs direct access to the host for making block devices and/or filesystem mounts available to the Kubernetes kubelet.

The mount point used by the CSI driver must be set to Bidirectional to allow Kubelet on the host to see mounts created by the CSI driver container. See the example below:

      containers:
      - name: my-csi-driver
        ...
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
        - name: mountpoint-dir
          mountPath: /var/lib/kubelet/pods
          mountPropagation: "Bidirectional"
      - name: node-driver-registrar
        ...
        volumeMounts:
        - name: registration-dir
          mountPath: /registration
      volumes:
      # This volume is where the socket for kubelet->driver communication lives
      - name: socket-dir
        hostPath:
          path: /var/lib/kubelet/plugins/<driver-name>
          type: DirectoryOrCreate
      # This volume is where the driver mounts volumes
      - name: mountpoint-dir
        hostPath:
          path: /var/lib/kubelet/pods
          type: Directory
      # This volume is where the node-driver-registrar registers the plugin
      # with kubelet
      - name: registration-dir
        hostPath:
          path: /var/lib/kubelet/plugins_registry
          type: Directory

Deploying

Deploying a CSI driver onto Kubernetes is highlighted in detail in Recommended Mechanism for Deploying CSI Drivers on Kubernetes.

Enable privileged Pods

To use CSI drivers, your Kubernetes cluster must allow privileged pods (i.e. --allow-privileged flag must be set to true for both the API server and the kubelet). This is the default in some environments (e.g. GCE, GKE, kubeadm).

Ensure your API server and kubelet are started with the privileged flag:

$ ./kube-apiserver ...  --allow-privileged=true ...
$ ./kubelet ...  --allow-privileged=true ...

Note: Starting from Kubernetes 1.13.0, --allow-privileged is true by default for the kubelet. It will be deprecated in future Kubernetes releases.

Enabling mount propagation

Another feature that CSI depends on is mount propagation, which allows the sharing of volumes mounted by one container with other containers in the same pod, or even with other pods on the same node. For mount propagation to work, the Docker daemon for the cluster must allow shared mounts. See the mount propagation docs to find out how to check whether shared mounts are enabled and how to configure Docker for shared mounts.

Examples

  • Simple deployment example using a single pod for all components: see the hostpath example.
  • Full deployment example using a DaemonSet for the node plugin and StatefulSet for the controller plugin: TODO

More information

For more information, please read CSI Volume Plugins in Kubernetes Design Doc.

Example

The Hostpath CSI driver is a simple sample driver that provisions a directory on the host. It can be used as an example to get started writing a driver, however it is not meant for production use. The deployment example shows how to deploy and use that driver in Kubernetes.

The example deployment uses the original RBAC rule files that are maintained together with the sidecar apps and deploys into the default namespace. A real production deployment should copy the RBAC files and customize them, as explained in the comments of those files.

If you encounter any problems, please check the Troubleshooting page.

Testing

This section describes how CSI developers can test their CSI drivers.

Unit Testing

The CSI sanity package from csi-test can be used for unit testing your CSI driver.

It contains a set of basic tests that all CSI drivers should pass (for example, NodePublishVolume should fail when no volume id is provided, etc.).

This package can be used in two modes:

  • Via a Golang test framework (sanity package is imported as a dependency)
  • Via a command line against your driver binary.

Read the documentation of the sanity package for more details.
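
For illustration, a minimal test in the first mode might look like this; the package name and socket address are assumptions about how your driver is started for testing:

package mydriver

import (
	"testing"

	"github.com/kubernetes-csi/csi-test/v4/pkg/sanity"
)

func TestSanity(t *testing.T) {
	// Assumes the driver under test is already serving CSI on this socket.
	config := sanity.NewTestConfig()
	config.Address = "unix:///tmp/csi.sock"
	sanity.Test(t, config)
}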

Functional Testing

Drivers should be functionally "end-to-end" tested while deployed in a Kubernetes cluster. Previously, how to do this and what tests to run was left up to driver authors. Now, a standard set of Kubernetes CSI end-to-end tests can be imported and run by third party CSI drivers. This documentation specifies how to do so.

The CSI community is also looking into establishing an official "CSI Conformance Suite" to recognize "officially certified CSI drivers". This documentation will be updated with more information once that process has been defined.

Kubernetes End to End Testing for CSI Storage Plugins

Currently, csi-sanity exists to help test compliance with the CSI spec, but e2e testing of plugins is also needed to give plugin authors and users confidence that their plugin integrates well with specific versions of Kubernetes.

Setting up End to End tests for your CSI Plugin

Prerequisites:

  • A Kubernetes v1.13+ Cluster
  • Kubectl

There are two ways to run end-to-end tests for your CSI plugin:

  1. Use the Kubernetes E2E tests, providing a DriverDefinition YAML file via a parameter.
     • Note: In some cases you will not be able to use this method, i.e. running the e2e tests by just providing a YAML file that defines your CSI plugin. For example, the NFS CSI plugin currently does not support dynamic provisioning, so we would want to skip those tests and run only the pre-provisioned tests. For such cases, you would need to write your own testdriver, which is discussed below.
  2. Import the in-tree storage tests and run them using go test.

This doc will cover how to run the E2E tests using the second method.

Importing the E2E test suite as a library

In-tree storage e2e tests can be used to test CSI storage plugins. Your repo should be set up similarly to the NFS CSI plugin, where the test files are in a test directory and the main test file is in the cmd directory.

To be able to import the Kubernetes in-tree storage tests, the CSI plugin needs to use Kubernetes v1.14+ (add it to the plugin's Gopkg.toml, since pluggable E2E tests became available in v1.14). CSI plugin authors are also required to implement a testdriver for their CSI plugin. The testdriver provides the required functionality to set up testcases for a particular plugin.

Any testdriver must implement the following functions (since it implements the TestDriver interface):

  • GetDriverInfo() *testsuites.DriverInfo
  • SkipUnsupportedTest(pattern testpatterns.TestPattern)
  • PrepareTest(f *framework.Framework) (*testsuites.PerTestConfig, func())

The PrepareTest method is where you would write code to set up your CSI plugin; it is called before each test case. It is recommended that you do not deploy your plugin in this method, but rather deploy it manually before running your tests.

GetDriverInfo returns a DriverInfo object that has all of the plugin's capabilities and required information. This object helps tests find the deployed plugin and also decides which tests should run (depending on the plugin's capabilities).

Here are examples of the NFS and Hostpath DriverInfo objects:

testsuites.DriverInfo{
    Name:        "csi-nfsplugin",
    MaxFileSize: testpatterns.FileSizeLarge,
    SupportedFsType: sets.NewString(
        "", // Default fsType
    ),
    Capabilities: map[testsuites.Capability]bool{
        testsuites.CapPersistence: true,
        testsuites.CapExec:        true,
    },
}

testsuites.DriverInfo{
    Name:        "csi-hostpath",
    FeatureTag:  "",
    MaxFileSize: testpatterns.FileSizeMedium,
    SupportedFsType: sets.NewString(
        "", // Default fsType
    ),
    Capabilities: map[testsuites.Capability]bool{
        testsuites.CapPersistence: true,
    },
}

You would define something similar for your CSI plugin.

SkipUnsupportedTest simply skips any tests that you define there.

Depending on your plugin's specs, you would implement other interfaces defined here. For example the NFS testdriver also implements PreprovisionedVolumeTestDriver and PreprovisionedPVTestDriver interfaces, to enable pre-provisioned tests.

After implementing the testdriver for your CSI plugin, you would create a csi-volumes.go file, where the implemented testdriver is used to run in-tree storage testsuites, similar to how the NFS CSI plugin does so. This is where you would define which testsuites you would want to run for your plugin. All available in-tree testsuites can be found here.

Finally, importing the test package into your main test file will initialize the testsuites to run the E2E tests.

The NFS plugin creates a binary to run E2E tests, but you could use go test instead to run E2E tests using a command like this:

go test -v <main test file> -ginkgo.v -ginkgo.progress --kubeconfig=<kubeconfig file> -timeout=0

Drivers

The following is a list of CSI drivers that can be used with Kubernetes:

NOTE: If you would like your driver to be added to this table, please open a pull request in this repo updating this file. The Other Features column may list Raw Block, Snapshot, Expansion, Cloning, and Topology. If a driver does not implement any of these features, please leave the column blank.

DISCLAIMER: Information in this table has not been validated by Kubernetes SIG-Storage. Users who want to use these CSI drivers need to contact driver maintainers for driver capabilities.

Production Drivers

Name | CSI Driver Name | Compatible with CSI Version(s) | Description | Persistence (Beyond Pod Lifetime) | Supported Access Modes | Dynamic Provisioning | Other Features
Alicloud Disk | diskplugin.csi.alibabacloud.com | v1.0 | A Container Storage Interface (CSI) Driver for Alicloud Disk | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot
Alicloud NAS | nasplugin.csi.alibabacloud.com | v1.0 | A Container Storage Interface (CSI) Driver for Alicloud Network Attached Storage (NAS) | Persistent | Read/Write Multiple Pods | No |
Alicloud OSS | ossplugin.csi.alibabacloud.com | v1.0 | A Container Storage Interface (CSI) Driver for Alicloud Object Storage Service (OSS) | Persistent | Read/Write Multiple Pods | No |
Alluxio | csi.alluxio.com | v1.0 | A Container Storage Interface (CSI) Driver for Alluxio File System | Persistent | Read/Write Multiple Pods | Yes |
ArStor CSI | arstor.csi.huayun.io | v1.0 | A Container Storage Interface (CSI) Driver for Huayun Storage Service (ArStor) | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning
AWS Elastic Block Storage | ebs.csi.aws.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for AWS Elastic Block Storage (EBS) | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion
AWS Elastic File System | efs.csi.aws.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for AWS Elastic File System (EFS) | Persistent | Read/Write Multiple Pods | Yes |
AWS FSx for Lustre | fsx.csi.aws.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for AWS FSx for Lustre | Persistent | Read/Write Multiple Pods | Yes |
Azure Blob | blob.csi.azure.com | v1.0 | A Container Storage Interface (CSI) Driver for Azure Blob storage | Persistent | Read/Write Multiple Pods | Yes | Expansion
Azure Disk | disk.csi.azure.com | v1.0 | A Container Storage Interface (CSI) Driver for Azure Disk | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
Azure File | file.csi.azure.com | v1.0 | A Container Storage Interface (CSI) Driver for Azure File | Persistent | Read/Write Multiple Pods | Yes | Expansion
BeeGFS | beegfs.csi.netapp.com | v1.3 | A Container Storage Interface (CSI) Driver for the BeeGFS Parallel File System | Persistent | Read/Write Multiple Pods | Yes |
Bigtera VirtualStor (block) | csi.block.bigtera.com | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for Bigtera VirtualStor block storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion
Bigtera VirtualStor (filesystem) | csi.fs.bigtera.com | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for Bigtera VirtualStor filesystem | Persistent | Read/Write Multiple Pods | Yes | Expansion
BizFlyCloud Block Storage | volume.csi.bizflycloud.vn | v1.2 | A Container Storage Interface (CSI) Driver for BizFly Cloud block storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion
CephFS | cephfs.csi.ceph.com | v0.3, >=v1.0.0 | A Container Storage Interface (CSI) Driver for CephFS | Persistent | Read/Write Multiple Pods | Yes | Expansion, Snapshot, Cloning
Ceph RBD | rbd.csi.ceph.com | v0.3, >=v1.0.0 | A Container Storage Interface (CSI) Driver for Ceph RBD | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology, Cloning, In-tree plugin migration
Cisco HyperFlex CSI | HX-CSI | v1.2 | A Container Storage Interface (CSI) Driver for Cisco HyperFlex | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Expansion, Cloning
CubeFS | csi.cubefs.com | v1.1.0 | A Container Storage Interface (CSI) Driver for CubeFS Storage | Persistent | Read/Write Multiple Pods | Yes |
Cinder | cinder.csi.openstack.org | v0.3, [v1.0, v1.3.0] | A Container Storage Interface (CSI) Driver for OpenStack Cinder | Persistent and Ephemeral | Depends on the storage backend used | Yes, if storage backend supports it | Raw Block, Snapshot, Expansion, Cloning, Topology
cloudscale.ch | csi.cloudscale.ch | v1.0 | A Container Storage Interface (CSI) Driver for the cloudscale.ch IaaS platform | Persistent | Read/Write Single Pod | Yes | Snapshot
CTDI Block Storage | csi.block.ctdi.com | v1.0 to v1.6 | A Container Storage Interface (CSI) Driver for CTDI Distributed Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning
Datatom-InfinityCSI | csi-infiblock-plugin | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for DATATOM Infinity storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology
Datatom-InfinityCSI (filesystem) | csi-infifs-plugin | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for DATATOM Infinity filesystem storage | Persistent | Read/Write Multiple Pods | Yes | Expansion
Datera | dsp.csi.daterainc.io | v1.0 | A Container Storage Interface (CSI) Driver for Datera Data Services Platform (DSP) | Persistent | Read/Write Single Pod | Yes | Snapshot
DDN EXAScaler | exa.csi.ddn.com | v1.0, v1.1 | A Container Storage Interface (CSI) Driver for DDN EXAScaler filesystems | Persistent | Read/Write Multiple Pods | Yes | Expansion
Dell EMC PowerMax | csi-powermax.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC PowerMax | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
Dell EMC PowerScale | csi-isilon.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC PowerScale | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology
Dell EMC PowerStore | csi-powerstore.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC PowerStore | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
Dell EMC Unity | csi-unity.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC Unity | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
Dell EMC VxFlexOS | csi-vxflexos.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC VxFlexOS | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
democratic-csi | org.democratic-csi.[X] | [v1.0, v1.5] | Generic CSI plugin supporting zfs based solutions (FreeNAS / TrueNAS and ZoL solutions such as Ubuntu), Synology, and more | Persistent and Ephemeral | Read/Write Single Pod (Block Volume), Read/Write Multiple Pods (File Volume) | Yes | Raw Block, Snapshot, Expansion, Cloning
Diamanti-CSI | dcx.csi.diamanti.com | v1.0 | A Container Storage Interface (CSI) Driver for Diamanti DCX Platform | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion
DigitalOcean Block Storage | dobs.csi.digitalocean.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion
Dothill-CSI | dothill.csi.enix.io | v1.3 | Generic CSI plugin supporting Seagate AssuredSan appliances such as HPE MSA, Dell EMC PowerVault ME4 and others ... | Persistent | Read/Write Single Node | Yes | Snapshot, Expansion
Ember CSI | [x].ember-csi.io | v0.2, v0.3, v1.0 | Multi-vendor CSI plugin supporting over 80 Drivers to provide block and mount storage to Container Orchestration systems. | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot
Excelero NVMesh | nvmesh-csi.excelero.com | v1.0, v1.1 | A Container Storage Interface (CSI) Driver for Excelero NVMesh | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Expansion
Exoscale CSI | csi.exoscale.com | v1.8.0 | A Container Storage Interface (CSI) Driver for Exoscale Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Topology
GCE Persistent Disk | pd.csi.storage.gke.io | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Google Compute Engine Persistent Disk (GCE PD) | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology
Google Cloud Filestore | filestore.csi.storage.gke.io | v0.3 | A Container Storage Interface (CSI) Driver for Google Cloud Filestore | Persistent | Read/Write Multiple Pods | Yes |
Google Cloud Storage FUSE | gcsfuse.csi.storage.gke.io | v1.x | A Container Storage Interface (CSI) Driver for Google Cloud Storage FUSE | Persistent and Ephemeral | Read/Write Multiple Pods | No |
Google Cloud Storage | gcs.csi.ofek.dev | v1.0 | A Container Storage Interface (CSI) Driver for Google Cloud Storage | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Expansion
GlusterFS | org.gluster.glusterfs | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for GlusterFS | Persistent | Read/Write Multiple Pods | Yes | Snapshot
Gluster VirtBlock | org.gluster.glustervirtblock | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Gluster Virtual Block volumes | Persistent | Read/Write Single Pod | Yes |
Hammerspace CSI | com.hammerspace.csi | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Hammerspace Storage | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot
Hedvig | io.hedvig.csi | v1.0 | A Container Storage Interface (CSI) Driver for Hedvig | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion
Hetzner Cloud Volumes CSI | csi.hetzner.cloud | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Hetzner Cloud Volumes | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion
Hitachi Vantara | hspc.csi.hitachi.com | v1.2 | A Container Storage Interface (CSI) Driver for VSP series Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning
HPE | csi.hpe.com | v1.3 | A multi-platform Container Storage Interface (CSI) driver. Supports HPE Alletra, Nimble Storage, Primera and 3PAR | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning
HPE ClusterStor Lustre CSI | lustre-csi.hpe.com | v1.5 | A Container Storage Interface (CSI) Driver for HPE Cray ClusterStor Lustre Storage | Persistent | Read/Write Multiple Pods | No |
HPE Ezmeral (MapR) | com.mapr.csi-kdf | v1.3 | A Container Storage Interface (CSI) Driver for HPE Ezmeral Data Fabric | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning
Huawei Storage CSI | csi.huawei.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for FusionStorage, OceanStor 100D, OceanStor Pacific, OceanStor Dorado V3, OceanStor Dorado V6, OceanStor V3, OceanStor V5 | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning
HwameiStor | lvm.hwameistor.io, disk.hwameistor.io | v1.3 | A Container Storage Interface (CSI) Driver for Local Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion
HyperV CSI | eu.zetanova.csi.hyperv | v1.0, v1.1 | A Container Storage Interface (CSI) driver to manage hyperv hosts | Persistent | Read/Write Multiple Pods | Yes |
IBM Block Storage | block.csi.ibm.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for IBM Spectrum Virtualize Family, IBM FlashSystem A9000 and A9000R, IBM DS8000 Family 8.x and higher. | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
IBM Storage Scale | spectrumscale.csi.ibm.com | v1.5 | A Container Storage Interface (CSI) Driver for the IBM Storage Scale File System | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning
IBM Cloud Block Storage VPC CSI Driver | vpc.block.csi.ibm.io | v1.5 | A Container Storage Interface (CSI) Driver for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion, Snapshot
Infinidat | infinibox-csi-driver | v1.0, v1.8 | A Container Storage Interface (CSI) Driver for Infinidat InfiniBox | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
Inspur InStorage CSI | csi-instorage | [v1.0, v1.6] | A Container Storage Interface (CSI) Driver for inspur AS/HF/CS/CF Series Primary Storage, inspur AS13000 SAN/NAS/Object Series SDS Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning
Intel PMEM-CSI | pmem-csi.intel.com | v1.0 | A Container Storage Interface (CSI) driver for PMEM from Intel | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block
Intelliflash Block Storage | intelliflash-csi-block-driver.intelliflash.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for Intelliflash Block Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology
Intelliflash File Storage | intelliflash-csi-file-driver.intelliflash.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for Intelliflash File Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology
ionir | ionir | v1.2 | A Container Storage Interface (CSI) Driver for ionir Kubernetes-Native Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Cloning
JD Cloud Storage Platform Block | jdcsp-block.csi.jdcloud.com | v1.8.0 | A Container Storage Interface (CSI) Driver for JD-CSP Block | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion
JD Cloud Storage Platform Filesystem | jdcsp-file.csi.jdcloud.com | v1.8.0 | A Container Storage Interface (CSI) Driver for JD-CSP Filesystem | Persistent | Read/Write Multiple Pods | Yes | Expansion
JuiceFS | csi.juicefs.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for JuiceFS File System | Persistent | Read/Write Multiple Pods | Yes |
kaDalu | org.kadalu.gluster | v0.3 | A CSI Driver (and operator) for GlusterFS | Persistent | Read/Write Multiple Pods | Yes |
KaiXiangTech MegaBric | flexblock.csi.kaixiangtech.com | v1.5.0 | A Container Storage Interface (CSI) plugin for KaiXiangTech MegaBric Storage | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Expansion, Cloning
KumoScale Block Storage | kumoscale.kioxia.com | v1.0 | A Container Storage Interface (CSI) Driver for KumoScale Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology
Lightbits Labs | csi.lightbitslabs.com | v1.2, v1.3 | A Container Storage Interface (CSI) Driver for Lightbits Storage | Persistent | Read/Write Single Pod (in volumeMode FileSystem), Read/Write Multiple Pods (in volumeMode Block) | Yes | Raw Block, Snapshot, Expansion, Cloning
Linode Block Storage | linodebs.csi.linode.com | v1.0 | A Container Storage Interface (CSI) Driver for Linode Block Storage | Persistent | Read/Write Single Pod | Yes |
LINSTOR | linstor.csi.linbit.com | v1.2 | A Container Storage Interface (CSI) Driver for LINSTOR volumes | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
Longhorn | driver.longhorn.io | v1.5 | A Container Storage Interface (CSI) Driver for Longhorn volumes | Persistent | Read/Write Single Node | Yes | Raw Block
MacroSAN | csi-macrosan | v1.0 | A Container Storage Interface (CSI) Driver for MacroSAN Block Storage | Persistent | Read/Write Single Pod | Yes |
Manila | manila.csi.openstack.org | v1.1, v1.2 | A Container Storage Interface (CSI) Driver for OpenStack Shared File System Service (Manila) | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Topology
MooseFS | com.tuxera.csi.moosefs | v1.0 | A Container Storage Interface (CSI) Driver for MooseFS clusters. | Persistent | Read/Write Multiple Pods | Yes |
NetApp | csi.trident.netapp.io | [v1.0, v1.8] | A Container Storage Interface (CSI) Driver for NetApp's Trident container storage orchestrator | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology
NexentaStor File Storage | nexentastor-csi-driver.nexenta.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for NexentaStor File Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology
NexentaStor Block Storage | nexentastor-block-csi-driver.nexenta.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for NexentaStor over iSCSI protocol | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology, Raw Block
NFS | nfs.csi.k8s.io | v1.0 | This driver allows Kubernetes to access an NFS server on a Linux node. | Persistent | Read/Write Multiple Pods | Yes |
NGX Storage Block Storage | iscsi.csi.ngxstorage.com | v1.8.0 | A Container Storage Interface (CSI) Driver for NGXStorage over iSCSI protocol | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion, Snapshot
Nutanix | csi.nutanix.com | v0.3, v1.0, v1.2 | A Container Storage Interface (CSI) Driver for Nutanix | Persistent | "Read/Write Single Pod" with Nutanix Volumes and "Read/Write Multiple Pods" with Nutanix Files | Yes | Raw Block, Snapshot, Expansion, Cloning
OpenEBS | cstor.csi.openebs.io | v1.0 | A Container Storage Interface (CSI) Driver for OpenEBS | Persistent | Read/Write Single Pod | Yes | Expansion, Snapshot, Cloning
Open-E | com.open-e.joviandss.csi | v1.0 | A Container Storage Interface (CSI) Driver for Open-E JovianDSS Storage | Persistent | Read/Write Single Pod | Yes | Snapshot, Cloning
Open-Local | local.csi.alibaba.com | v1.0 | A Container Storage Interface (CSI) Driver for Local Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion, Snapshot
Oracle Cloud Infrastructure (OCI) Block Storage | blockvolume.csi.oraclecloud.com | v1.1 | A Container Storage Interface (CSI) Driver for Oracle Cloud Infrastructure (OCI) Block Storage | Persistent | Read/Write Single Pod | Yes | Topology
oVirt | csi.ovirt.org | v1.0 | A Container Storage Interface (CSI) Driver for oVirt | Persistent | Read/Write Single Pod | Yes | Block, File Storage
Portworx | pxd.portworx.com | v1.4 | A Container Storage Interface (CSI) Driver for Portworx | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Raw Block, Cloning
Proxmox | csi.proxmox.sinextra.dev | v1.9 | A Container Storage Interface (CSI) Driver for Proxmox | Persistent | Read/Write Single Pod | Yes | Expansion, Topology, Raw Block
Pure Storage CSI | pure-csi | [v1.0, v1.3] | A Container Storage Interface (CSI) Driver for Pure Storage's Pure Service Orchestrator | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Cloning, Raw Block, Topology, Expansion
QingCloud CSI | disk.csi.qingcloud.com | v1.1 | A Container Storage Interface (CSI) Driver for QingCloud Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning
QingStor CSI | neonsan.csi.qingstor.com | v0.3, v1.1 | A Container Storage Interface (CSI) Driver for NeonSAN storage system | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning
Qiniu Kodo CSI | kodoplugin.storage.qiniu.com | v1.6 | A Container Storage Interface (CSI) Driver for Qiniu Object Storage (Kodo) | Persistent | Read/Write Multiple Pods | Yes |
Quobyte | quobyte-csi | v1.3.0 | A Container Storage Interface (CSI) Driver for Quobyte | Persistent | Read/Write Multiple Pods | Yes | Expansion, Snapshots
ROBIN | robin | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for ROBIN | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning
SandStone | csi-sandstone-plugin | v1.0 | A Container Storage Interface (CSI) Driver for SandStone USP | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning
Sangfor-EDS-File-Storage | eds.csi.file.sangfor.com | v1.0 | A Container Storage Interface (CSI) Driver for Sangfor Distributed File Storage (EDS) | Persistent | Read/Write Multiple Pods | Yes |
Sangfor-EDS-Block-Storage | eds.csi.block.sangfor.com | v1.0 | A Container Storage Interface (CSI) Driver for Sangfor Block Storage (EDS) | Persistent | Read/Write Single Pod | Yes |
Scaleway CSI | csi.scaleway.com | v1.2.0 | A Container Storage Interface (CSI) Driver for Scaleway Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology
Seagate Exos X | csi-exos-x.seagate.com | v1.3 | A CSI driver for Seagate Exos X and OEM systems | Persistent | Read/Write Single Pod | Yes | Snapshot, Expansion, Cloning
SeaweedFS | seaweedfs-csi-driver | v1.0 | A Container Storage Interface (CSI) Driver for SeaweedFS | Persistent | Read/Write Multiple Pods | Yes |
Secrets Store CSI Driver | secrets-store.csi.k8s.io | v0.0.10 | A Container Storage Interface (CSI) Driver for mounting secrets, keys, and certs stored in enterprise-grade external secrets stores as volumes. | Ephemeral | N/A | N/A |
SmartX | csi-smtx-plugin | v1.0 | A Container Storage Interface (CSI) Driver for SmartX ZBS Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion
SMB | smb.csi.k8s.io | v1.0 | This driver allows Kubernetes to access an SMB server on both Linux and Windows nodes. | Persistent | Read/Write Multiple Pods | Yes |
SODA | csi-soda-plugin | v1.0 | A Container Storage Interface (CSI) Driver for SODA | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot
SPDK-CSI | csi.spdk.io | v1.1 | A Container Storage Interface (CSI) Driver for SPDK | Persistent and Ephemeral | Read/Write Single Pod | Yes |
StorageOS | storageos | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for StorageOS | Persistent | Read/Write Multiple Pods | Yes |
Storidge | csi.cio.storidge.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Storidge CIO | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion
StorPool | csi-driver.storpool.com | v1.0 | A Container Storage Interface (CSI) Driver for StorPool | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Expansion
Synology | csi.san.synology.com | v1.0 | A Container Storage Interface (CSI) Driver for Synology NAS | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning
Tencent Cloud Block Storage | com.tencent.cloud.csi.cbs | v1.0 | A Container Storage Interface (CSI) Driver for Tencent Cloud Block Storage | Persistent | Read/Write Single Pod | Yes | Snapshot
Tencent Cloud File Storage | com.tencent.cloud.csi.cfs | v1.0 | A Container Storage Interface (CSI) Driver for Tencent Cloud File Storage | Persistent | Read/Write Multiple Pods | Yes |
Tencent Cloud Object Storage | com.tencent.cloud.csi.cosfs | v1.0 | A Container Storage Interface (CSI) Driver for Tencent Cloud Object Storage | Persistent | Read/Write Multiple Pods | No |
TopoLVM | topolvm.io | v1.1 | A Container Storage Interface (CSI) Driver for LVM | Persistent | Read/Write Single Node | Yes | Raw Block, Expansion, Topology, Snapshot, Cloning, Storage Capacity Tracking
Toyou CSI | csi.toyou.com | v1.9 | A Container Storage Interface (CSI) Driver for Toyou Storage | Persistent | Read/Write Multiple Pods | Yes |
TrueNAS | csi.hpe.com | v1.3 | A community supported Container Storage Provider (CSP) that leverages the HPE CSI Driver for Kubernetes. Works with TrueNAS CORE, TrueNAS SCALE and FreeNAS using iSCSI only | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning
VAST Data | csi.vastdata.com | v1.2 | A Container Storage Interface (CSI) Driver for VAST Data | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Expansion
XSKY-EBS | csi.block.xsky.com | v1.0 | A Container Storage Interface (CSI) Driver for XSKY Distributed Block Storage (X-EBS) | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning
XSKY-FS | csi.fs.xsky.com | v1.0 | A Container Storage Interface (CSI) Driver for XEDP, XEUS, XUDS, XGFS, X3000, X5000 | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion
Vault | secrets.csi.kubevault.com | v1.0 | A Container Storage Interface (CSI) Driver for mounting HashiCorp Vault secrets as volumes. | Ephemeral | N/A | N/A |
VDA | csi.vda.io | v1.0 | An open source block storage system based on SPDK | Persistent | Read/Write Single Pod | N/A |
Veritas InfoScale Volumes | org.veritas.infoscale | v1.2 | A Container Storage Interface (CSI) Driver for Veritas InfoScale volumes | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning
vSphere | csi.vsphere.vmware.com | v1.4 | A Container Storage Interface (CSI) Driver for VMware vSphere | Persistent | Read/Write Single Pod (Block Volume), Read/Write Multiple Pods (File Volume) | Yes | Raw Block, Expansion (Block Volume), Topology Aware (Block Volume), Snapshot (Block Volume)
Vultr Block Storage | block.csi.vultr.com | v1.2 | A Container Storage Interface (CSI) Driver for Vultr Block Storage | Persistent | Read/Write Single Pod | Yes |
WekaIO | csi.weka.io | v1.0 | A Container Storage Interface (CSI) Driver for mounting WekaIO WekaFS filesystem as volumes | Persistent | Read/Write Multiple Pods | Yes |
Yandex.Cloudyandex.csi.flant.comv1.2A Container Storage Interface (CSI) plugin for Yandex.Cloud Compute DisksPersistentRead/Write Single PodYes
YanRongYun?v1.0A Container Storage Interface (CSI) Driver for YanRong YRCloudFile StoragePersistentRead/Write Multiple PodsYes
Zadara-CSIcsi.zadara.comv1.0, v1.1A Container Storage Interface (CSI) plugin for Zadara VPSA Storage Array & VPSA All-FlashPersistentRead/Write Multiple PodsYesRaw Block, Snapshot, Expansion, Cloning

Sample Drivers

Name | Status | More Information
Flexvolume | Sample |
HostPath | v1.2.0 | Only use for single-node tests. See the Example page for Kubernetes-specific instructions.
ImagePopulator | Prototype | Driver that lets you use a container image as an ephemeral volume.
In-memory Sample Mock Driver | v0.3.0 | The sample mock driver used for csi-sanity.
Synology NAS | v1.0.0 | An unofficial (and unsupported) Container Storage Interface Driver for Synology NAS.
VFS Driver | Released | A CSI plugin that provides a virtual file system.

API Reference

The following is the list of CSI APIs:

Volume Snapshot

Packages:

snapshot.storage.k8s.io/v1

Resource Types:

VolumeSnapshot

VolumeSnapshot is a user’s request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot.
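
For illustration, a minimal VolumeSnapshot manifest requesting a dynamically provisioned snapshot might look like the following sketch (the snapshot, class, and PVC names are hypothetical):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo            # hypothetical snapshot name
  namespace: default
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # hypothetical class; omit to use the default
  source:
    persistentVolumeClaimName: pvc-demo             # hypothetical PVC in the same namespace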

Field Description
apiVersion
string
snapshot.storage.k8s.io/v1
kind
string
VolumeSnapshot
metadata
Kubernetes meta/v1.ObjectMeta
(Optional)

Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
VolumeSnapshotSpec

spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required.



source
VolumeSnapshotSource

source specifies where a snapshot will be created from. This field is immutable after creation. Required.

volumeSnapshotClassName
string
(Optional)

VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one has been marked as default, CreateSnapshot will fail and generate an event. The empty string is not allowed for this field.

status
VolumeSnapshotStatus
(Optional)

status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.

VolumeSnapshotClass

VolumeSnapshotClass specifies parameters that an underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced.
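
As a sketch, a VolumeSnapshotClass could be declared as follows (the class name is hypothetical, and the hostpath example driver stands in for your own driver name):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass       # hypothetical class name
driver: hostpath.csi.k8s.io          # must match the CSI driver that handles snapshots
deletionPolicy: Delete               # or Retain
parameters: {}                       # optional; driver-specific and opaque to Kubernetes

Note that driver, deletionPolicy, and parameters are top-level fields of the class: a VolumeSnapshotClass has no spec.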

Field Description
apiVersion
string
snapshot.storage.k8s.io/v1
kind
string
VolumeSnapshotClass
metadata
Kubernetes meta/v1.ObjectMeta
(Optional)

Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

Refer to the Kubernetes API documentation for the fields of the metadata field.
driver
string

driver is the name of the storage driver that handles this VolumeSnapshotClass. Required.

parameters
map[string]string
(Optional)

parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes.

deletionPolicy
DeletionPolicy

deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are “Retain” and “Delete”. “Retain” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. “Delete” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. Required.

VolumeSnapshotContent

VolumeSnapshotContent represents the actual “on-disk” snapshot object in the underlying storage system.
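
For a pre-existing snapshot, a cluster administrator creates the VolumeSnapshotContent object manually; a minimal sketch (all names and the snapshot handle below are hypothetical) could be:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: pre-provisioned-snapcontent  # hypothetical name
spec:
  deletionPolicy: Retain             # users MUST set this for pre-existing snapshots
  driver: hostpath.csi.k8s.io        # hypothetical; must match the driver's GetPluginName()
  source:
    snapshotHandle: snap-0123456789  # hypothetical CSI "snapshot_id" on the storage system
  volumeSnapshotRef:
    name: pre-provisioned-snapshot   # the VolumeSnapshot to bind to
    namespace: default               # name and namespace MUST be provided for binding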

Field Description
apiVersion
string
snapshot.storage.k8s.io/v1
kind
string
VolumeSnapshotContent
metadata
Kubernetes meta/v1.ObjectMeta
(Optional)

Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

Refer to the Kubernetes API documentation for the fields of the metadata field.
spec
VolumeSnapshotContentSpec

spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required.



volumeSnapshotRef
Kubernetes core/v1.ObjectReference

volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. The VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference this VolumeSnapshotContent’s name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, the name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required.

deletionPolicy
DeletionPolicy

deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are “Retain” and “Delete”. “Retain” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. “Delete” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the “DeletionPolicy” field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required.

driver
string

driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required.

volumeSnapshotClassName
string
(Optional)

name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with a different set of values, and as such, should not be referenced post-snapshot creation.

source
VolumeSnapshotContentSource

source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required.

sourceVolumeMode
Kubernetes core/v1.PersistentVolumeMode
(Optional)

SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either “Filesystem” or “Block”. If not specified, it indicates the source volume’s mode is unknown. This field is immutable. This field is an alpha field.

status
VolumeSnapshotContentStatus
(Optional)

status represents the current information of a snapshot.

DeletionPolicy (string alias)

(Appears on:VolumeSnapshotClass, VolumeSnapshotContentSpec)

DeletionPolicy describes a policy for end-of-life maintenance of volume snapshot contents

Value Description

"Delete"

volumeSnapshotContentDelete means the snapshot will be deleted from the underlying storage system on release from its volume snapshot.

"Retain"

volumeSnapshotContentRetain means the snapshot will be left in its current state on release from its volume snapshot.

VolumeSnapshotContentSource

(Appears on:VolumeSnapshotContentSpec)

VolumeSnapshotContentSource represents the CSI source of a snapshot. Exactly one of its members must be set. Members in VolumeSnapshotContentSource are immutable.

Field Description
volumeHandle
string
(Optional)

volumeHandle specifies the CSI “volume_id” of the volume from which a snapshot should be dynamically taken. This field is immutable.

snapshotHandle
string
(Optional)

snapshotHandle specifies the CSI “snapshot_id” of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable.

VolumeSnapshotContentSpec

(Appears on:VolumeSnapshotContent)

VolumeSnapshotContentSpec is the specification of a VolumeSnapshotContent

Field Description
volumeSnapshotRef
Kubernetes core/v1.ObjectReference

volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. The VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference this VolumeSnapshotContent’s name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, the name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required.

deletionPolicy
DeletionPolicy

deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are “Retain” and “Delete”. “Retain” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. “Delete” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the “DeletionPolicy” field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required.

driver
string

driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required.

volumeSnapshotClassName
string
(Optional)

name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with a different set of values, and as such, should not be referenced post-snapshot creation.

source
VolumeSnapshotContentSource

source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required.

sourceVolumeMode
Kubernetes core/v1.PersistentVolumeMode
(Optional)

SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either “Filesystem” or “Block”. If not specified, it indicates the source volume’s mode is unknown. This field is immutable. This field is an alpha field.

VolumeSnapshotContentStatus

(Appears on:VolumeSnapshotContent)

VolumeSnapshotContentStatus is the status of a VolumeSnapshotContent object. Note that CreationTime, RestoreSize, ReadyToUse, and Error appear in both VolumeSnapshotStatus and VolumeSnapshotContentStatus. Fields in VolumeSnapshotStatus are updated based on fields in VolumeSnapshotContentStatus; they are eventually consistent. These fields are duplicated in both objects for the following reasons: - Fields in VolumeSnapshotContentStatus can be used for filtering when importing a volume snapshot. - VolumeSnapshotStatus is used by end users because they cannot see VolumeSnapshotContent. - The CSI snapshotter sidecar remains lightweight because it watches only VolumeSnapshotContent objects, not VolumeSnapshot objects.

Field Description
snapshotHandle
string
(Optional)

snapshotHandle is the CSI “snapshot_id” of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.

creationTime
int64
(Optional)

creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the “creation_time” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “creation_time” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.

restoreSize
int64
(Optional)

restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the “size_bytes” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “size_bytes” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.

readyToUse
bool

readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the “ready_to_use” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “ready_to_use” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it, otherwise, this field will be set to “True”. If not specified, it means the readiness of a snapshot is unknown.

error
VolumeSnapshotError
(Optional)

error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.

VolumeSnapshotError

(Appears on:VolumeSnapshotContentStatus, VolumeSnapshotStatus)

VolumeSnapshotError describes an error encountered during snapshot creation.

Field Description
time
Kubernetes meta/v1.Time
(Optional)

time is the timestamp when the error was encountered.

message
string
(Optional)

message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.

VolumeSnapshotSource

(Appears on:VolumeSnapshotSpec)

VolumeSnapshotSource specifies whether the underlying snapshot should be dynamically taken upon creation or if a pre-existing VolumeSnapshotContent object should be used. Exactly one of its members must be set. Members in VolumeSnapshotSource are immutable.
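
To bind to the pre-existing VolumeSnapshotContent sketched earlier, the VolumeSnapshot sets volumeSnapshotContentName instead of persistentVolumeClaimName (names are hypothetical and must match the content object’s volumeSnapshotRef):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pre-provisioned-snapshot     # must match spec.volumeSnapshotRef.name on the content object
  namespace: default
spec:
  source:
    volumeSnapshotContentName: pre-provisioned-snapcontent   # pre-existing content object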

Field Description
persistentVolumeClaimName
string
(Optional)

persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exist and needs to be created. This field is immutable.

volumeSnapshotContentName
string
(Optional)

volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.

VolumeSnapshotSpec

(Appears on:VolumeSnapshot)

VolumeSnapshotSpec describes the common attributes of a volume snapshot.

Field Description
source
VolumeSnapshotSource

source specifies where a snapshot will be created from. This field is immutable after creation. Required.

volumeSnapshotClassName
string
(Optional)

VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one has been marked as default, CreateSnapshot will fail and generate an event. The empty string is not allowed for this field.

VolumeSnapshotStatus

(Appears on:VolumeSnapshot)

VolumeSnapshotStatus is the status of the VolumeSnapshot. Note that CreationTime, RestoreSize, ReadyToUse, and Error appear in both VolumeSnapshotStatus and VolumeSnapshotContentStatus. Fields in VolumeSnapshotStatus are updated based on fields in VolumeSnapshotContentStatus; they are eventually consistent. These fields are duplicated in both objects for the following reasons: - Fields in VolumeSnapshotContentStatus can be used for filtering when importing a volume snapshot. - VolumeSnapshotStatus is used by end users because they cannot see VolumeSnapshotContent. - The CSI snapshotter sidecar remains lightweight because it watches only VolumeSnapshotContent objects, not VolumeSnapshot objects.

Field Description
boundVolumeSnapshotContentName
string
(Optional)

boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.

creationTime
Kubernetes meta/v1.Time
(Optional)

creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the “creation_time” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “creation_time” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.

readyToUse
bool
(Optional)

readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the “ready_to_use” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “ready_to_use” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it, otherwise, this field will be set to “True”. If not specified, it means the readiness of a snapshot is unknown.

restoreSize
k8s.io/apimachinery/pkg/api/resource.Quantity
(Optional)

restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the “size_bytes” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “size_bytes” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.

error
VolumeSnapshotError
(Optional)

error is the last observed error during snapshot creation, if any. This field can help upper-level controllers (e.g., an application controller) decide whether to continue waiting for the snapshot to be created, based on the type of error reported. The snapshot controller will keep retrying when an error occurs during snapshot creation. Upon success, this error field will be cleared.


Generated with gen-crd-api-reference-docs on git commit b20011c8.

Troubleshooting

Known Issues

  • [minikube-3378]: Volume mount causes minikube VM to become corrupted

Common Errors

Node plugin pod does not start (RunContainerError status)

kubectl describe pod your-nodeplugin-pod shows:

failed to start container "your-driver": Error response from daemon:
linux mounts: Path /var/lib/kubelet/pods is mounted on / but it is not a shared mount

Your Docker host is not configured to allow shared mounts. Take a look at this page for instructions to enable them.

External attacher can't find VolumeAttachments

If you have a Kubernetes 1.9 cluster, the inability to list VolumeAttachment objects, together with the following error, is caused by the missing storage.k8s.io/v1alpha1=true runtime configuration on the API server:

$ kubectl logs csi-pod external-attacher
...
I0306 16:34:50.976069       1 reflector.go:240] Listing and watching *v1alpha1.VolumeAttachment from github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86

E0306 16:34:50.992034       1 reflector.go:205] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1alpha1.VolumeAttachment: the server could not find the requested resource
...

Problems with the external components

The external component images are under active development, and it can happen that they become incompatible with each other. If the issues above have been ruled out, contact the sig-storage team and/or run the end-to-end test:

go run hack/e2e.go -- --provider=local --test --test_args="--ginkgo.focus=Feature:CSI"