Introduction
Kubernetes Container Storage Interface (CSI) Documentation
This site documents how to develop, deploy, and test a Container Storage Interface (CSI) driver on Kubernetes.
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI, third-party storage providers can write and deploy plugins that expose new storage systems in Kubernetes without ever having to touch the core Kubernetes code.
The target audience for this site is third-party developers interested in developing CSI drivers for Kubernetes.
Kubernetes users interested in how to deploy or manage an existing CSI driver on Kubernetes should look at the documentation provided by the author of the CSI driver.
Kubernetes users interested in how to use a CSI driver should look at kubernetes.io documentation.
Kubernetes Releases
Kubernetes | CSI Spec Compatibility | Status |
---|---|---|
v1.9 | v0.1.0 | Alpha |
v1.10 | v0.2.0 | Beta |
v1.11 | v0.3.0 | Beta |
v1.13 | v0.3.0, v1.0.0 | GA |
Development and Deployment
Minimum Requirements (for Developing and Deploying a CSI driver for Kubernetes)
Kubernetes is as minimally prescriptive about packaging and deployment of a CSI Volume Driver as possible.
The only requirements are around how Kubernetes (master and node) components find and communicate with a CSI driver.
Specifically, the following is dictated by Kubernetes regarding CSI:
- Kubelet to CSI Driver Communication
  - Kubelet directly issues CSI calls (like `NodeStageVolume`, `NodePublishVolume`, etc.) to CSI drivers via a Unix Domain Socket to mount and unmount volumes.
  - Kubelet discovers CSI drivers (and the Unix Domain Socket to use to interact with a CSI driver) via the kubelet plugin registration mechanism.
  - Therefore, all CSI drivers deployed on Kubernetes MUST register themselves using the kubelet plugin registration mechanism on each supported node.
- Master to CSI Driver Communication
  - Kubernetes master components do not communicate directly (via a Unix Domain Socket or otherwise) with CSI drivers.
  - Kubernetes master components interact only with the Kubernetes API.
  - Therefore, CSI drivers that require operations that depend on the Kubernetes API (like volume create, volume attach, volume snapshot, etc.) MUST watch the Kubernetes API and trigger the appropriate CSI operations against it.
Because these requirements are minimally prescriptive, CSI driver developers are free to implement and deploy their drivers as they see fit.
That said, to ease development and deployment, the mechanism described below is recommended.
Recommended Mechanism (for Developing and Deploying a CSI driver for Kubernetes)
The Kubernetes development team has established a "Recommended Mechanism" for developing, deploying, and testing CSI Drivers on Kubernetes. It aims to reduce boilerplate code and simplify the overall process for CSI Driver developers.
This "Recommended Mechanism" makes use of the following components:
- Kubernetes CSI Sidecar Containers
- Kubernetes CSI objects
- CSI Driver Testing tools
To implement a CSI driver using this mechanism, a CSI driver developer should:
- Create a containerized application implementing the Identity, Node, and optionally the Controller services described in the CSI specification (the CSI driver container).
- See Developing CSI Driver for more information.
- Unit test it using csi-sanity.
- See Driver - Unit Testing for more information.
- Define Kubernetes API YAML files that deploy the CSI driver container along with appropriate sidecar containers.
- See Deploying in Kubernetes for more information.
- Deploy the driver on a Kubernetes cluster and run end-to-end functional tests on it.
Reference Links
Developing CSI Driver for Kubernetes
Remain Informed
All developers of CSI drivers should join https://groups.google.com/forum/#!forum/container-storage-interface-drivers-announce to remain informed about changes to CSI or Kubernetes that may affect existing CSI drivers.
Overview
The first step to creating a CSI driver is writing an application implementing the gRPC services described in the CSI specification.
At a minimum, CSI drivers must implement the following CSI services:
- CSI `Identity` service
  - Enables callers (Kubernetes components and CSI sidecar containers) to identify the driver and what optional functionality it supports.
- CSI `Node` service
  - Only `NodePublishVolume`, `NodeUnpublishVolume`, and `NodeGetCapabilities` are required.
  - Required methods enable callers to make a volume available at a specified path and discover what optional functionality the driver supports.
All CSI services may be implemented in the same CSI driver application. The CSI driver application should be containerized to make it easy to deploy on Kubernetes. Once containerized, the CSI driver can be paired with CSI Sidecar Containers and deployed in node and/or controller mode as appropriate.
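For illustration, below is a minimal sketch of the required CSI `Identity` service in Go, assuming the official CSI protobuf bindings (`github.com/container-storage-interface/spec/lib/go/csi`). The driver name, version, and socket path are placeholders, and a real driver would register its `Node` (and optionally `Controller`) services on the same gRPC server.

```go
package main

import (
	"context"
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identityServer implements the three CSI Identity RPCs.
type identityServer struct{}

func (s *identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{
		Name:          "exampledriver.example.com", // hypothetical driver name
		VendorVersion: "0.1.0",
	}, nil
}

func (s *identityServer) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	// An empty response advertises no optional plugin capabilities.
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (s *identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	// Report that the driver is up; the livenessprobe sidecar relies on this call.
	return &csi.ProbeResponse{}, nil
}

func main() {
	// Kubelet and the CSI sidecar containers talk to the driver over a Unix Domain Socket.
	listener, err := net.Listen("unix", "/csi/csi.sock") // placeholder socket path
	if err != nil {
		panic(err)
	}
	server := grpc.NewServer()
	csi.RegisterIdentityServer(server, &identityServer{})
	// A real driver would also register its Node (and optionally Controller) servers here.
	_ = server.Serve(listener)
}
```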
Capabilities
If your driver supports additional features, CSI "capabilities" can be used to advertise the optional methods/services it supports, for example:
- `CONTROLLER_SERVICE` (`PluginCapability`)
  - The entire CSI `Controller` service is optional. This capability indicates the driver implements one or more of the methods in the CSI `Controller` service.
- `VOLUME_ACCESSIBILITY_CONSTRAINTS` (`PluginCapability`)
  - This capability indicates the volumes for this driver may not be equally accessible from all nodes in the cluster, and that the driver will return additional topology related information that Kubernetes can use to schedule workloads more intelligently or influence where a volume will be provisioned.
- `VolumeExpansion` (`PluginCapability`)
  - This capability indicates the driver supports resizing (expanding) volumes after creation.
- `CREATE_DELETE_VOLUME` (`ControllerServiceCapability`)
  - This capability indicates the driver supports dynamic volume provisioning and deletion.
- `PUBLISH_UNPUBLISH_VOLUME` (`ControllerServiceCapability`)
  - This capability indicates the driver implements `ControllerPublishVolume` and `ControllerUnpublishVolume` -- operations that correspond to the Kubernetes volume attach/detach operations. This may, for example, result in a "volume attach" operation against the Google Cloud control plane to attach the specified volume to the specified node for the Google Cloud PD CSI Driver.
- `CREATE_DELETE_SNAPSHOT` (`ControllerServiceCapability`)
  - This capability indicates the driver supports provisioning volume snapshots and the ability to provision new volumes using those snapshots.
- `CLONE_VOLUME` (`ControllerServiceCapability`)
  - This capability indicates the driver supports cloning of volumes.
- `STAGE_UNSTAGE_VOLUME` (`NodeServiceCapability`)
  - This capability indicates the driver implements `NodeStageVolume` and `NodeUnstageVolume` -- operations that correspond to the Kubernetes volume device mount/unmount operations. This may, for example, be used to create a global (per node) volume mount of a block storage device.
This is a partial list; please see the CSI spec for a complete list of capabilities. Also see the Features section to understand how a feature integrates with Kubernetes.
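As a rough illustration of how a driver advertises controller capabilities, the hedged sketch below (reusing the CSI Go bindings and a hypothetical `controllerServer` type) answers `ControllerGetCapabilities` with `CREATE_DELETE_VOLUME` and `CREATE_DELETE_SNAPSHOT`:

```go
// controllerServer is a hypothetical type implementing the CSI Controller service.
func (s *controllerServer) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (*csi.ControllerGetCapabilitiesResponse, error) {
	newCap := func(t csi.ControllerServiceCapability_RPC_Type) *csi.ControllerServiceCapability {
		return &csi.ControllerServiceCapability{
			Type: &csi.ControllerServiceCapability_Rpc{
				Rpc: &csi.ControllerServiceCapability_RPC{Type: t},
			},
		}
	}
	return &csi.ControllerGetCapabilitiesResponse{
		Capabilities: []*csi.ControllerServiceCapability{
			// Advertise only the RPCs the driver actually implements.
			newCap(csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME),
			newCap(csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT),
		},
	}, nil
}
```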
Versioning, Support, and Kubernetes Compatibility
Versioning
Each Kubernetes CSI component version is expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning.
Patch version releases only contain bug fixes that do not break any backwards compatibility.
Minor version releases may contain new functionality that does not break backwards compatibility (except for alpha features).
Major version releases may contain new functionality or fixes that may break backwards compatibility with previous major releases. Changes that require a major version increase include: removing or changing an API, flags, or behavior; new RBAC requirements that are not opt-in; and new Kubernetes minimum version requirements.
A litmus test for not breaking compatibility is whether the image of a component can be replaced in an existing deployment, without changing that deployment in any other way, and the component continues to work.
To minimize the number of branches we need to support, we do not have a general policy for releasing new minor versions on older majors. We will make exceptions for work related to meeting production readiness requirements. Only the previous major version will be eligible for these exceptions, so long as the time between the previous major version and the current major version is under six months. For example, if "X.0.0" and "X+1.0.0" were released under six months apart, "X.0.0" would be eligible for new minor releases.
Support
The Kubernetes CSI project follows the broader Kubernetes project on support. Every minor release branch will be supported with patch releases on an as-needed basis for at least 1 year, starting with the first release of that minor version. In addition, the minor release branch will be supported for at least 3 months after the next minor version is released, to allow time to integrate with the latest release.
Alpha Features
Alpha features are subject to break or be removed across Kubernetes and CSI component releases. There is no guarantee alpha features will continue to function if upgrading the Kubernetes cluster or upgrading a CSI sidecar or controller.
Kubernetes Compatibility
Each release of a CSI component has a minimum, maximum and recommended Kubernetes version that it is compatible with.
Minimum Version
The minimum version specifies the lowest Kubernetes version where the component will function with its most basic functionality, excluding features added later. Generally, this aligns with the Kubernetes version where that CSI spec version was added.
Maximum Version
The maximum Kubernetes version specifies the last working Kubernetes version for all beta and GA features that the component supports. This generally aligns with the Kubernetes release before support for that CSI spec version was removed, or before a particular Kubernetes API or feature that the component depends on was removed.
Recommended Version
Note that any new features added may have dependencies on Kubernetes versions greater than the minimum Kubernetes version. The recommended Kubernetes version specifies the lowest Kubernetes version needed where all of the component's supported features will function correctly. A new sidecar feature may fail to function correctly on a Kubernetes cluster below the recommended Kubernetes version. For that reason, it is encouraged to stay as close to the recommended Kubernetes version as possible.
For more details on which features are supported with which Kubernetes versions and their corresponding CSI components, please see each feature's individual page.
Kubernetes Changelog
This page summarizes major CSI changes made in each Kubernetes release. For details on individual features, visit the Features section.
Kubernetes 1.28
Features
- Removals:
- Deprecations:
Kubernetes 1.27
Features
- Beta
- Alpha
Kubernetes 1.26
Features
- GA
- Delegate fsgroup to CSI driver
- Azure File CSI migration
- vSphere CSI migration
- Alpha
- Cross namespace volume provisioning
Kubernetes 1.25
Features
- GA
- CSI ephemeral inline volumes
- Core CSI migration
- AWS EBS CSI migration
- GCE PD CSI migration
- Beta
- vSphere CSI Migration (on by default)
- Portworx CSI Migration (off-by-default)
- Alpha
Deprecation
- In-tree plugin removal:
- AWS EBS
- Azure Disk
Kubernetes 1.24
Features
- GA
- Volume expansion
- Storage capacity tracking
- Azure Disk CSI Migration
- OpenStack Cinder CSI Migration
- Beta
- Volume populator
- Alpha
- SELinux relabeling with mount options
- Prevent volume mode conversion
Kubernetes 1.23
Features
- GA
- CSI fsgroup policy
- Non-recursive fsgroup ownership
- Generic ephemeral volumes
- Beta
- Delegate fsgroup to CSI driver
- Azure Disk CSI Migration (on-by-default)
- AWS EBS CSI Migration (on-by-default)
- GCE PD CSI Migration (on-by-default)
- Alpha
- Recover from Expansion Failure
- Honor PV Reclaim Policy
- RBD CSI Migration
- Portworx CSI migration
Kubernetes 1.22
Features
- GA
- Windows CSI (CSI-Proxy API v1)
- Pod token requests (CSIServiceAccountToken)
- Alpha
- ReadWriteOncePod access mode
- Delegate fsgroup to CSI driver
- Generic data populators
Kubernetes 1.21
Features
- Beta
- Pod token requests (CSIServiceAccountToken)
- Storage capacity tracking
- Generic ephemeral volumes
Kubernetes 1.20
Breaking Changes
- Kubelet no longer creates the target_path for NodePublishVolume in accordance with the CSI spec. Kubelet also no longer checks if staging and target paths are mounts or corrupted. CSI drivers need to be idempotent and do any necessary mount verification.
Features
- GA
- Volume snapshots and restore
- Beta
- CSI fsgroup policy
- Non-recursive fsgroup ownership
- Alpha
- Pod token requests (CSIServiceAccountToken)
Kubernetes 1.19
Deprecations
- Behaviour of NodeExpandVolume being called between NodeStage and NodePublish is deprecated for CSI volumes. CSI drivers should support calling NodeExpandVolume after NodePublish if they have the node EXPAND_VOLUME capability.
Features
- Beta
- CSI on Windows
- CSI migration for AzureDisk and vSphere drivers
- Alpha
- CSI fsgroup policy
- Generic ephemeral volumes
- Storage capacity tracking
- Volume health monitoring
Kubernetes 1.18
Deprecations
- The `storage.k8s.io/v1beta1` `CSIDriver` object has been deprecated and will be removed in a future release.
- In a future release, kubelet will no longer create the CSI NodePublishVolume target directory, in accordance with the CSI specification. CSI drivers may need to be updated accordingly to properly create and process the target path.
Features
- GA
- Raw block volumes
- Volume cloning
- Skip attach
- Pod info on mount
- Beta
- CSI migration for Openstack cinder driver.
- Alpha
- CSI on Windows
- `storage.k8s.io/v1` `CSIDriver` object introduced.
Kubernetes 1.17
Breaking Changes
- CSI 0.3 support has been removed. CSI 0.3 drivers will no longer function.
Deprecations
- The `storage.k8s.io/v1beta1` `CSINode` object has been deprecated and will be removed in a future release.
Features
- GA
- Volume topology
- Volume limits
- Beta
- Volume snapshots and restore
- CSI migration for AWS EBS and GCE PD drivers
- `storage.k8s.io/v1` `CSINode` object introduced.
Kubernetes 1.16
Features
- Beta
- Volume cloning
- Volume expansion
- Ephemeral local volumes
Kubernetes 1.15
Features
- Volume capacity usage metrics
- Alpha
- Volume cloning
- Ephemeral local volumes
- Resizing secrets
Kubernetes 1.14
Breaking Changes
- The `csi.storage.k8s.io/v1alpha1` `CSINodeInfo` and `CSIDriver` CRDs are no longer supported.
Features
- Beta
- Topology
- Raw block
- Skip attach
- Pod info on mount
- Alpha
- Volume expansion
- `storage.k8s.io/v1beta1` `CSINode` and `CSIDriver` objects introduced.
Kubernetes 1.13
Deprecations
- CSI spec 0.2 and 0.3 are deprecated and support will be removed in Kubernetes 1.17.
Features
- GA support added for CSI spec 1.0.
Kubernetes 1.12
Breaking Changes
- Kubelet device plugin registration is enabled by default, which requires CSI plugins to use `driver-registrar:v0.3.0` to register with kubelet.
Features
- Alpha
- Snapshots
- Topology
- Skip attach
- Pod info on mount
- `csi.storage.k8s.io/v1alpha1` `CSINodeInfo` and `CSIDriver` CRDs were introduced and have to be installed before deploying a CSI driver.
Kubernetes 1.11
Features
- Beta support added for CSI spec 0.3.
- Alpha
- Raw block
Kubernetes 1.10
Breaking Changes
- CSI spec 0.1 is no longer supported.
Features
- Beta support added for CSI spec 0.2.
  This added optional `NodeStageVolume` and `NodeUnstageVolume` calls, which map to the Kubernetes `MountDevice` and `UnmountDevice` operations.
Kubernetes 1.9
Features
- Alpha support added for CSI spec 0.1.
Kubernetes Cluster Controllers
The Kubernetes cluster controllers are responsible for managing snapshot objects and operations across multiple CSI drivers, so they should be bundled and deployed by the Kubernetes distributors as part of their Kubernetes cluster management process (independent of any CSI Driver).
The Kubernetes development team maintains the following Kubernetes cluster controllers:
Snapshot Controller
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-snapshotter
Status: GA v4.0.0+
When Volume Snapshot was promoted to Beta in Kubernetes 1.17, the CSI external-snapshotter sidecar controller was split into two controllers: a snapshot-controller and a CSI external-snapshotter sidecar. See the following table for snapshot-controller release information.
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-snapshotter v6.3.0 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
external-snapshotter v6.2.2 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-snapshotter v6.1.0 | release-6.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0 | v1.20 | - | v1.24 |
external-snapshotter v6.0.1 | release-6.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v6.0.1 | v1.20 | - | v1.24 |
external-snapshotter v5.0.1 | release-5.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v5.0.1 | v1.20 | - | v1.22 |
external-snapshotter v4.2.1 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v4.2.1 | v1.20 | - | v1.22 |
external-snapshotter v4.1.1 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v4.1.1 | v1.20 | - | v1.20 |
external-snapshotter v4.0.1 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v4.0.1 | v1.20 | - | v1.20 |
external-snapshotter v3.0.3 (beta) | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v3.0.3 | v1.17 | - | v1.17 |
external-snapshotter v2.1.4 (beta) | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-controller:v2.1.4 | v1.17 | - | v1.17 |
For more information on the CSI external-snapshotter sidecar, see this external-snapshotter page.
Description
The snapshot controller watches the Kubernetes API server for `VolumeSnapshot` and `VolumeSnapshotContent` CRD objects, while the CSI `external-snapshotter` sidecar only watches the Kubernetes API server for `VolumeSnapshotContent` CRD objects. The snapshot controller creates the `VolumeSnapshotContent` CRD object, which triggers the CSI `external-snapshotter` sidecar to create a snapshot on the storage system.
The snapshot controller also watches for `VolumeGroupSnapshot` and `VolumeGroupSnapshotContent` CRD objects when Volume Group Snapshot support is enabled via the `--enable-volume-group-snapshots` option.
For detailed snapshot beta design changes, see the design doc here.
For detailed information about volume snapshot and restore functionality, see Volume Snapshot & Restore.
For detailed information about volume group snapshot and restore functionality, see Volume Group Snapshot & Restore.
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-snapshotter/blob/release-6.2/README.md.
Deployment
Kubernetes distributors should bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver).
If your cluster does not come pre-installed with the correct components, you may manually install these components by executing the following steps.
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-6.2
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
Snapshot Validation Webhook
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-snapshotter
Status: GA as of 4.0.0
There is a new validating webhook server which provides tightened validation on snapshot objects. It SHOULD be installed by Kubernetes distros along with the snapshot-controller, not by end users, and it SHOULD be installed in all Kubernetes clusters that have the snapshot feature enabled.
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-snapshotter v6.3.0 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
external-snapshotter v6.2.2 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-snapshotter v6.1.0 | release-6.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0 | v1.20 | - | v1.24 |
snapshot-validation-webhook v6.0.1 | release-6.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v6.0.1 | v1.20 | - | v1.24 |
snapshot-validation-webhook v5.0.1 | release-5.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v5.0.1 | v1.20 | - | v1.22 |
snapshot-validation-webhook v4.2.1 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v4.2.1 | v1.20 | - | v1.22 |
snapshot-validation-webhook v4.1.1 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v4.1.0 | v1.20 | - | v1.20 |
snapshot-validation-webhook v4.0.1 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v4.0.1 | v1.20 | - | v1.20 |
snapshot-validation-webhook v3.0.3 | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/snapshot-validation-webhook:v3.0.3 | v1.17 | - | v1.17 |
Description
The snapshot validating webhook is an HTTP callback which responds to admission requests. It is part of a larger plan to tighten validation for volume snapshot objects. This webhook introduces the ratcheting validation mechanism targeting the tighter validation. The cluster admin or Kubernetes distribution admin should install the webhook alongside the snapshot controllers and CRDs.
:warning: WARNING: Cluster admins choosing not to install the webhook server and participate in the phased release process can cause future problems when upgrading from the `v1beta1` to the `v1` VolumeSnapshot API, if there are currently persisted objects which fail the new stricter validation. Potential impacts include being unable to delete invalid snapshot objects.
Deployment
Kubernetes distributors should bundle and deploy the snapshot validation webhook along with the snapshot controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver).
Read more about how to install the example webhook here.
CSI Proxy
Status and Releases
Git Repository: https://github.com/kubernetes-csi/csi-proxy
Status: V1 starting with v1.0.0
Status | Min K8s Version | Max K8s Version |
---|---|---|
v0.1.0 | 1.18 | - |
v0.2.0+ | 1.18 | - |
v1.0.0+ | 1.18 | - |
Description
CSI Proxy is a binary that exposes a set of gRPC APIs around storage operations over named pipes in Windows. Containers, such as CSI node plugins, can mount the named pipes depending on the operations they want to exercise on the host, and invoke the APIs.
Each named pipe supports a specific version of an API (e.g. v1alpha1, v2beta1) that targets a specific area of storage (e.g. disk, volume, file, SMB, iSCSI), for example `\\.\pipe\csi-proxy-filesystem-v1alpha1` and `\\.\pipe\csi-proxy-disk-v1beta1`. Any release of the csi-proxy.exe binary will strive to maintain backward compatibility across as many prior stable versions of an API group as possible. Please see details in the CSI Windows support KEP.
Usage
Run the csi-proxy.exe binary directly on a Windows node. The command line options are:
- `-kubelet-path`: the prefix path of the kubelet directory in the host file system (the default value is `C:\var\lib\kubelet`)
- `-windows-service`: configure to run as a Windows Service
- `-log_file`: if non-empty, use this log file (note: `logtostderr` must be set to `false` when setting `-log_file`)
Note that `-kubelet-pod-path` and `-kubelet-csi-plugins-path` were used in versions prior to 1.0.0; they have been replaced by the new `-kubelet-path` parameter.
For detailed information (binary parameters, etc.), see the README of the relevant branch.
Deployment
It is the responsibility of the Kubernetes distribution or cluster admin to install csi-proxy. Run the csi-proxy.exe binary directly, or run it as a Windows Service, on Kubernetes nodes. For example:
$flags = "-windows-service -log_file=\etc\kubernetes\logs\csi-proxy.log -logtostderr=false"
sc.exe create csiproxy binPath= "${env:NODE_DIR}\csi-proxy.exe $flags"
sc.exe failure csiproxy reset= 0 actions= restart/10000
sc.exe start csiproxy
Kubernetes CSI Sidecar Containers
Kubernetes CSI Sidecar Containers are a set of standard containers that aim to simplify the development and deployment of CSI Drivers on Kubernetes.
These containers contain common logic to watch the Kubernetes API, trigger appropriate operations against the “CSI volume driver” container, and update the Kubernetes API as appropriate.
The containers are intended to be bundled with third-party CSI driver containers and deployed together as pods.
The containers are developed and maintained by the Kubernetes Storage community.
Use of the containers is strictly optional, but highly recommended.
Benefits of these sidecar containers include:
- Reduction of "boilerplate" code.
- CSI Driver developers do not have to worry about complicated, "Kubernetes specific" code.
- Separation of concerns.
- Code that interacts with the Kubernetes API is isolated from (and in a different container than) the code that implements the CSI interface.
The Kubernetes development team maintains the following Kubernetes CSI Sidecar Containers:
- external-provisioner
- external-attacher
- external-snapshotter
- external-resizer
- node-driver-registrar
- cluster-driver-registrar (deprecated)
- livenessprobe
CSI external-attacher
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-attacher
Status: GA/Stable
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-attacher v4.4.0 | release-4.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.4.0 | v1.17 | - | v1.27 |
external-attacher v4.3.0 | release-4.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.3.0 | v1.17 | - | v1.22 |
external-attacher v4.2.0 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.2.0 | v1.17 | - | v1.22 |
external-attacher v4.1.0 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.1.0 | v1.17 | - | v1.22 |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-attacher v4.0.0 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v4.0.0 | v1.17 | - | v1.22 |
external-attacher v3.5.1 | release-3.5 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.5.1 | v1.17 | - | v1.22 |
external-attacher v3.4.0 | release-3.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.4.0 | v1.17 | - | v1.22 |
external-attacher v3.3.0 | release-3.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.3.0 | v1.17 | - | v1.22 |
external-attacher v3.2.1 | release-3.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.2.1 | v1.17 | - | v1.17 |
external-attacher v3.1.0 | release-3.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.1.0 | v1.17 | - | v1.17 |
external-attacher v3.0.2 | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-attacher:v3.0.2 | v1.17 | - | v1.17 |
external-attacher v2.2.0 | release-2.2 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v2.2.0 | v1.14 | - | v1.17 |
external-attacher v2.1.0 | release-2.1 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v2.1.0 | v1.14 | - | v1.17 |
external-attacher v2.0.0 | release-2.0 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v2.0.0 | v1.14 | - | v1.15 |
external-attacher v1.2.1 | release-1.2 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v1.2.1 | v1.13 | - | v1.15 |
external-attacher v1.1.1 | release-1.1 | v1.0.0 | - | quay.io/k8scsi/csi-attacher:v1.1.1 | v1.13 | - | v1.14 |
external-attacher v0.4.2 | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/csi-attacher:v0.4.2 | v1.10 | v1.16 | v1.10 |
Description
The CSI external-attacher
is a sidecar container that watches the Kubernetes API server for VolumeAttachment
objects and triggers Controller[Publish|Unpublish]Volume
operations against a CSI endpoint.
Usage
CSI drivers that require integrating with the Kubernetes volume attach/detach hooks should use this sidecar container, and advertise the CSI PUBLISH_UNPUBLISH_VOLUME
controller capability.
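As a rough driver-side sketch (hypothetical `controllerServer` type and backend helpers, CSI Go bindings as in the earlier Identity example), the operations this sidecar triggers might look like the following; `PublishContext` is how attach details are handed on to the node-side calls:

```go
func (s *controllerServer) ControllerPublishVolume(ctx context.Context, req *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) {
	// attachBackendVolume is a placeholder for the storage system's "attach volume to node" call.
	devicePath, err := attachBackendVolume(ctx, req.GetVolumeId(), req.GetNodeId())
	if err != nil {
		return nil, err
	}
	return &csi.ControllerPublishVolumeResponse{
		// PublishContext is passed back to the driver in NodeStageVolume/NodePublishVolume.
		PublishContext: map[string]string{"devicePath": devicePath},
	}, nil
}

func (s *controllerServer) ControllerUnpublishVolume(ctx context.Context, req *csi.ControllerUnpublishVolumeRequest) (*csi.ControllerUnpublishVolumeResponse, error) {
	// Detaching an already-detached volume must succeed (idempotency).
	if err := detachBackendVolume(ctx, req.GetVolumeId(), req.GetNodeId()); err != nil {
		return nil, err
	}
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}
```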
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-attacher/blob/master/README.md.
Deployment
The CSI external-attacher
is deployed as a controller. See deployment section for more details.
CSI external-provisioner
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-provisioner
Status: GA/Stable
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-provisioner v3.6.0 | release-3.6 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.6.0 | v1.20 | - | v1.27 |
external-provisioner v3.5.0 | release-3.5 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.5.0 | v1.20 | - | v1.26 |
external-provisioner v3.4.1 | release-3.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.4.1 | v1.20 | - | v1.26 |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-provisioner v3.3.1 | release-3.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.3.1 | v1.20 | - | v1.25 |
external-provisioner v3.2.2 | release-3.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.2.2 | v1.20 | - | v1.22 |
external-provisioner v3.1.1 | release-3.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.1.1 | v1.20 | - | v1.22 |
external-provisioner v3.0.0 | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v3.0.0 | v1.20 | - | v1.22 |
external-provisioner v2.2.2 | release-2.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v2.2.2 | v1.17 | - | v1.21 |
external-provisioner v2.1.2 | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v2.1.2 | v1.17 | - | v1.19 |
external-provisioner v2.0.5 | release-2.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v2.0.5 | v1.17 | - | v1.19 |
external-provisioner v1.6.1 | release-1.6 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-provisioner:v1.6.1 | v1.13 | v1.21 | v1.18 |
external-provisioner v1.5.0 | release-1.5 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.5.0 | v1.13 | v1.21 | v1.17 |
external-provisioner v1.4.0 | release-1.4 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.4.0 | v1.13 | v1.21 | v1.16 |
external-provisioner v1.3.1 | release-1.3 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.3.1 | v1.13 | v1.19 | v1.15 |
external-provisioner v1.2.0 | release-1.2 | v1.0.0 | - | quay.io/k8scsi/csi-provisioner:v1.2.0 | v1.13 | v1.19 | v1.14 |
external-provisioner v0.4.2 | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/csi-provisioner:v0.4.2 | v1.10 | v1.16 | v1.10 |
Description
The CSI `external-provisioner` is a sidecar container that watches the Kubernetes API server for `PersistentVolumeClaim` objects.
It calls `CreateVolume` against the specified CSI endpoint to provision a new volume.
Volume provisioning is triggered by the creation of a new Kubernetes `PersistentVolumeClaim` object, if the PVC references a Kubernetes `StorageClass`, and the name in the `provisioner` field of the storage class matches the name returned by the specified CSI endpoint in the `GetPluginInfo` call.
Once a new volume is successfully provisioned, the sidecar container creates a Kubernetes `PersistentVolume` object to represent the volume.
The deletion of a `PersistentVolumeClaim` object bound to a `PersistentVolume` corresponding to this driver with a `delete` reclaim policy causes the sidecar container to trigger a `DeleteVolume` operation against the specified CSI endpoint to delete the volume. Once the volume is successfully deleted, the sidecar container also deletes the `PersistentVolume` object representing the volume.
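To connect this to the driver side, here is a minimal, hedged sketch of the `CreateVolume` and `DeleteVolume` handlers the external-provisioner invokes (hypothetical `controllerServer` type and backend helpers; CSI Go bindings as in the earlier examples):

```go
func (s *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	// Parameters carries the opaque key/values copied from the StorageClass.
	params := req.GetParameters()
	capacity := req.GetCapacityRange().GetRequiredBytes()

	// createBackendVolume is a placeholder for the storage system call;
	// it must be idempotent with respect to req.GetName().
	volumeID, err := createBackendVolume(ctx, req.GetName(), capacity, params)
	if err != nil {
		return nil, err
	}
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      volumeID,
			CapacityBytes: capacity,
		},
	}, nil
}

func (s *controllerServer) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
	// Deleting an already-deleted volume must succeed (idempotency).
	if err := deleteBackendVolume(ctx, req.GetVolumeId()); err != nil {
		return nil, err
	}
	return &csi.DeleteVolumeResponse{}, nil
}
```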
DataSources
The external-provisioner provides the ability to request a volume be pre-populated from a data source during provisioning. For more information on how data sources are handled see DataSources.
Snapshot
The CSI external-provisioner
supports the Snapshot
DataSource. If a Snapshot
CRD is specified as a data source on a PVC object, the sidecar container fetches the information about the snapshot by fetching the SnapshotContent
object and populates the data source field in the resulting CreateVolume
call to indicate to the storage system that the new volume should be populated using the specified snapshot.
PersistentVolumeClaim (clone)
Cloning is also implemented by specifying a kind:
of type PersistentVolumeClaim
in the DataSource field of a Provision request. It's the responsbility of the external-provisioner to verify that the claim specified in the DataSource object exists, is in the same storage class as the volume being provisioned and that the claim is currently Bound
.
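On the driver side, both data sources arrive in the `CreateVolumeRequest` as a `VolumeContentSource`. A hedged sketch of handling them follows; the restore/clone helpers are hypothetical, and a real driver should also echo the content source in the returned `Volume`:

```go
func (s *controllerServer) populateFromSource(ctx context.Context, req *csi.CreateVolumeRequest, volumeID string) error {
	src := req.GetVolumeContentSource()
	if src == nil {
		return nil // plain empty volume, nothing to populate
	}
	if snap := src.GetSnapshot(); snap != nil {
		// restoreFromSnapshot is a placeholder for the backend restore call.
		return restoreFromSnapshot(ctx, volumeID, snap.GetSnapshotId())
	}
	if vol := src.GetVolume(); vol != nil {
		// cloneFromVolume is a placeholder for the backend clone call.
		return cloneFromVolume(ctx, volumeID, vol.GetVolumeId())
	}
	return nil
}
```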
StorageClass Parameters
When provisioning a new volume, the CSI `external-provisioner` sets the `map<string, string> parameters` field in the CSI `CreateVolumeRequest` call to the key/values specified in the `StorageClass` it is handling.
The CSI `external-provisioner` (v1.0.1+) also reserves the parameter keys prefixed with `csi.storage.k8s.io/`. Any `StorageClass` keys prefixed with `csi.storage.k8s.io/` are not passed to the CSI driver as an opaque `parameter`.
The following reserved `StorageClass` parameter keys trigger behavior in the CSI `external-provisioner`:
- `csi.storage.k8s.io/provisioner-secret-name`
- `csi.storage.k8s.io/provisioner-secret-namespace`
- `csi.storage.k8s.io/controller-publish-secret-name`
- `csi.storage.k8s.io/controller-publish-secret-namespace`
- `csi.storage.k8s.io/node-stage-secret-name`
- `csi.storage.k8s.io/node-stage-secret-namespace`
- `csi.storage.k8s.io/node-publish-secret-name`
- `csi.storage.k8s.io/node-publish-secret-namespace`
- `csi.storage.k8s.io/fstype`
If the PVC `VolumeMode` is set to `Filesystem`, and the value of `csi.storage.k8s.io/fstype` is specified, it is used to populate the `FsType` in `CreateVolumeRequest.VolumeCapabilities[x].AccessType` and the `AccessType` is set to `Mount`.
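A driver might read that value back out of the request along these lines (a hedged sketch; the default fstype is a placeholder):

```go
func fsTypeFromRequest(req *csi.CreateVolumeRequest) string {
	for _, vc := range req.GetVolumeCapabilities() {
		// For Filesystem PVCs the external-provisioner sets AccessType to Mount
		// and copies csi.storage.k8s.io/fstype into FsType.
		if mnt := vc.GetMount(); mnt != nil && mnt.GetFsType() != "" {
			return mnt.GetFsType()
		}
	}
	return "ext4" // hypothetical driver default
}
```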
For more information on how secrets are handled see Secrets & Credentials.
Example `StorageClass`:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gold-example-storage
provisioner: exampledriver.example.com
parameters:
disk-type: ssd
csi.storage.k8s.io/fstype: ext4
csi.storage.k8s.io/provisioner-secret-name: mysecret
csi.storage.k8s.io/provisioner-secret-namespace: mynamespace
PersistentVolumeClaim and PersistentVolume Parameters
The CSI `external-provisioner` (v1.6.0+) introduces the `--extra-create-metadata` flag, which automatically sets the following `map<string, string> parameters` in the CSI `CreateVolumeRequest`:
- `csi.storage.k8s.io/pvc/name`
- `csi.storage.k8s.io/pvc/namespace`
- `csi.storage.k8s.io/pv/name`
These parameters are not part of the `StorageClass`, but are internally generated using the name and namespace of the source `PersistentVolumeClaim` and `PersistentVolume`.
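For example, a driver that wants to tag backend volumes with their Kubernetes origin could read these reserved keys from the `CreateVolume` parameters; a small sketch (the keys are only present when `--extra-create-metadata` is enabled):

```go
func pvcMetadata(req *csi.CreateVolumeRequest) (pvcName, pvcNamespace, pvName string) {
	params := req.GetParameters()
	// These keys are set by the external-provisioner only when it runs
	// with the --extra-create-metadata flag.
	return params["csi.storage.k8s.io/pvc/name"],
		params["csi.storage.k8s.io/pvc/namespace"],
		params["csi.storage.k8s.io/pv/name"]
}
```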
Usage
CSI drivers that support dynamic volume provisioning should use this sidecar container, and advertise the CSI CREATE_DELETE_VOLUME
controller capability.
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-provisioner/blob/master/README.md.
Deployment
The CSI external-provisioner
is deployed as a controller. See deployment section for more details.
CSI external-resizer
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-resizer
Status: Beta starting with v0.3.0
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-resizer v1.9.0 | release-1.9 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.9.0 | v1.16 | - | v1.28 |
external-resizer v1.8.0 | release-1.8 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.8.0 | v1.16 | - | v1.23 |
external-resizer v1.7.0 | release-1.7 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.7.0 | v1.16 | - | v1.23 |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-resizer v1.6.0 | release-1.6 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.6.0 | v1.16 | - | v1.23 |
external-resizer v1.5.0 | release-1.5 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.5.0 | v1.16 | - | v1.23 |
external-resizer v1.4.0 | release-1.4 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.4.0 | v1.16 | - | v1.23 |
external-resizer v1.3.0 | release-1.3 | v1.5.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.3.0 | v1.16 | - | v1.22 |
external-resizer v1.2.0 | release-1.2 | v1.2.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.2.0 | v1.16 | - | v1.21 |
external-resizer v1.1.0 | release-1.1 | v1.2.0 | - | registry.k8s.io/sig-storage/csi-resizer:v1.1.0 | v1.16 | - | v1.16 |
external-resizer v0.5.0 | release-0.5 | v1.2.0 | - | quay.io/k8scsi/csi-resizer:v0.5.0 | v1.15 | - | v1.16 |
external-resizer v0.2.0 | release-0.2 | v1.1.0 | - | quay.io/k8scsi/csi-resizer:v0.2.0 | v1.15 | - | v1.15 |
external-resizer v0.1.0 | release-0.1 | v1.1.0 | - | quay.io/k8scsi/csi-resizer:v0.1.0 | v1.14 | v1.14 | v1.14 |
external-resizer v1.0.1 | release-1.0 | v1.2.0 | - | quay.io/k8scsi/csi-resizer:v1.0.1 | v1.16 | - | v1.16 |
Description
The CSI `external-resizer` is a sidecar container that watches the Kubernetes API server for `PersistentVolumeClaim` object edits and triggers `ControllerExpandVolume` operations against a CSI endpoint if the user requested more storage on the `PersistentVolumeClaim` object.
Usage
CSI drivers that support Kubernetes volume expansion should use this sidecar container, and advertise the CSI VolumeExpansion
plugin capability.
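A minimal, hedged sketch of the corresponding driver-side handler follows (hypothetical backend helper; whether `NodeExpansionRequired` is true depends on whether the filesystem must also be grown on the node):

```go
func (s *controllerServer) ControllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
	requested := req.GetCapacityRange().GetRequiredBytes()

	// expandBackendVolume is a placeholder for the storage system resize call.
	newSize, err := expandBackendVolume(ctx, req.GetVolumeId(), requested)
	if err != nil {
		return nil, err
	}
	return &csi.ControllerExpandVolumeResponse{
		CapacityBytes: newSize,
		// Ask kubelet to call NodeExpandVolume so the filesystem is resized as well.
		NodeExpansionRequired: true,
	}, nil
}
```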
Deployment
The CSI external-resizer
is deployed as a controller. See deployment section for more details.
CSI external-snapshotter
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-snapshotter
Status: GA v4.0.0+
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-snapshotter v6.3.0 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
external-snapshotter v6.2.2 | release-6.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1 | v1.20 | - | v1.24 |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
external-snapshotter v6.1.0 | release-6.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0 | v1.20 | - | v1.24 |
external-snapshotter v6.0.1 | release-6.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v6.0.1 | v1.20 | - | v1.24 |
external-snapshotter v5.0.1 | release-5.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1 | v1.20 | - | v1.22 |
external-snapshotter v4.2.1 | release-4.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v4.2.1 | v1.20 | - | v1.22 |
external-snapshotter v4.1.1 | release-4.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v4.1.1 | v1.20 | - | v1.20 |
external-snapshotter v4.0.1 | release-4.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v4.0.1 | v1.20 | - | v1.20 |
external-snapshotter v3.0.3 (beta) | release-3.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v3.0.3 | v1.17 | - | v1.17 |
external-snapshotter v2.1.4 (beta) | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-snapshotter:v2.1.4 | v1.17 | - | v1.17 |
external-snapshotter v1.2.2 (alpha) | release-1.2 | v1.0.0 | - | /registry.k8s.io/sig-storage/csi-snapshotter:v1.2.2 | v1.13 | v1.16 | v1.14 |
external-snapshotter v0.4.2 (alpha) | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/csi-snapshotter:v0.4.2 | v1.12 | v1.16 | v1.12 |
To use the snapshot beta and GA feature, a snapshot controller is also required. For more information, see this snapshot-controller page.
Snapshot Beta/GA
Description
The CSI `external-snapshotter` sidecar watches the Kubernetes API server for `VolumeSnapshotContent` CRD objects. The CSI `external-snapshotter` sidecar is also responsible for calling the CSI RPCs `CreateSnapshot`, `DeleteSnapshot`, and `ListSnapshots`.
Volume Group Snapshot support can be enabled with the `--enable-volume-group-snapshots` option. When enabled, the CSI `external-snapshotter` sidecar watches the API server for `VolumeGroupSnapshotContent` CRD objects, and is responsible for calling the CSI RPCs `CreateVolumeGroupSnapshot`, `DeleteVolumeGroupSnapshot`, and `GetVolumeGroupSnapshot`.
VolumeSnapshotClass and VolumeGroupSnapshotClass Parameters
When provisioning a new volume snapshot, the CSI `external-snapshotter` sets the `map<string, string> parameters` field in the CSI `CreateSnapshotRequest` call to the key/values specified in the `VolumeSnapshotClass` it is handling.
When volume group snapshot support is enabled, the `map<string, string> parameters` field is set in the CSI `CreateVolumeGroupSnapshotRequest` call to the key/values specified in the `VolumeGroupSnapshotClass` it is handling.
The CSI `external-snapshotter` also reserves the parameter keys prefixed with `csi.storage.k8s.io/`. Any `VolumeSnapshotClass` or `VolumeGroupSnapshotClass` keys prefixed with `csi.storage.k8s.io/` are not passed to the CSI driver as an opaque `parameter`.
The following reserved `VolumeSnapshotClass` parameter keys trigger behavior in the CSI `external-snapshotter`:
- `csi.storage.k8s.io/snapshotter-secret-name` (v1.0.1+)
- `csi.storage.k8s.io/snapshotter-secret-namespace` (v1.0.1+)
- `csi.storage.k8s.io/snapshotter-list-secret-name` (v2.1.0+)
- `csi.storage.k8s.io/snapshotter-list-secret-namespace` (v2.1.0+)
For more information on how secrets are handled see Secrets & Credentials.
VolumeSnapshot, VolumeSnapshotContent, VolumeGroupSnapshot and VolumeGroupSnapshotContent Parameters
The CSI `external-snapshotter` (v4.0.0+) introduces the `--extra-create-metadata` flag, which automatically sets the following `map<string, string> parameters` in the CSI `CreateSnapshotRequest` and `CreateVolumeGroupSnapshotRequest`:
- `csi.storage.k8s.io/volumesnapshot/name`
- `csi.storage.k8s.io/volumesnapshot/namespace`
- `csi.storage.k8s.io/volumesnapshotcontent/name`
These parameters are internally generated using the name and namespace of the source `VolumeSnapshot` and `VolumeSnapshotContent`.
For detailed snapshot beta design changes, see the design doc here.
For detailed information about volume snapshot and restore functionality, see Volume Snapshot & Restore.
Usage
CSI drivers that support provisioning volume snapshots and the ability to provision new volumes using those snapshots should use this sidecar container, and advertise the CSI `CREATE_DELETE_SNAPSHOT` controller capability.
CSI drivers that support provisioning volume group snapshots should also use this sidecar container, and advertise the CSI `CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT` controller capability.
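A hedged sketch of the driver-side `CreateSnapshot` handler this sidecar calls (hypothetical backend helper; `timestamppb` refers to `google.golang.org/protobuf/types/known/timestamppb`):

```go
func (s *controllerServer) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (*csi.CreateSnapshotResponse, error) {
	// snapshotBackendVolume is a placeholder; it must be idempotent with respect to req.GetName().
	snapID, sizeBytes, err := snapshotBackendVolume(ctx, req.GetSourceVolumeId(), req.GetName())
	if err != nil {
		return nil, err
	}
	return &csi.CreateSnapshotResponse{
		Snapshot: &csi.Snapshot{
			SnapshotId:     snapID,
			SourceVolumeId: req.GetSourceVolumeId(),
			SizeBytes:      sizeBytes,
			CreationTime:   timestamppb.Now(),
			ReadyToUse:     true, // false if the backend completes the snapshot asynchronously
		},
	}, nil
}
```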
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-snapshotter/blob/release-6.2/README.md.
Deployment
The CSI external-snapshotter
is deployed as a sidecar controller. See deployment section for more details.
For an example deployment, see this example which deploys external-snapshotter
and external-provisioner
with the Hostpath CSI driver.
Snapshot Alpha
Description
The CSI external-snapshotter
is a sidecar container that watches the Kubernetes API server for VolumeSnapshot
and VolumeSnapshotContent
CRD objects.
The creation of a new VolumeSnapshot
object referencing a SnapshotClass
CRD object corresponding to this driver causes the sidecar container to trigger a CreateSnapshot
operation against the specified CSI endpoint to provision a new snapshot. When a new snapshot is successfully provisioned, the sidecar container creates a Kubernetes VolumeSnapshotContent
object to represent the new snapshot.
The deletion of a VolumeSnapshot
object bound to a VolumeSnapshotContent
corresponding to this driver with a delete
deletion policy causes the sidecar container to trigger a DeleteSnapshot
operation against the specified CSI endpoint to delete the snapshot. Once the snapshot is successfully deleted, the sidecar container also deletes the VolumeSnapshotContent
object representing the snapshot.
Usage
CSI drivers that support provisioning volume snapshots and the ability to provision new volumes using those snapshots should use this sidecar container, and advertise the CSI CREATE_DELETE_SNAPSHOT
controller capability.
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-snapshotter/blob/release-1.2/README.md.
Deployment
The CSI external-snapshotter
is deployed as a controller. See deployment section for more details.
For an example deployment, see this example which deploys external-snapshotter
and external-provisioner
with the Hostpath CSI driver.
CSI livenessprobe
Status and Releases
Git Repository: https://github.com/kubernetes-csi/livenessprobe
Status: GA/Stable
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version |
---|---|---|---|---|---|---|
livenessprobe v2.12.0 | release-2.12 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.12.0 | v1.13 | - |
livenessprobe v2.11.0 | release-2.11 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.11.0 | v1.13 | - |
livenessprobe v2.10.0 | release-2.10 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.10.0 | v1.13 | - |
livenessprobe v2.9.0 | release-2.9 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.9.0 | v1.13 | - |
livenessprobe v2.8.0 | release-2.8 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.8.0 | v1.13 | - |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version |
---|---|---|---|---|---|---|
livenessprobe v2.7.0 | release-2.7 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.7.0 | v1.13 | - |
livenessprobe v2.6.0 | release-2.6 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.6.0 | v1.13 | - |
livenessprobe v2.5.0 | release-2.5 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.5.0 | v1.13 | - |
livenessprobe v2.4.0 | release-2.4 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.4.0 | v1.13 | - |
livenessprobe v2.3.0 | release-2.3 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.3.0 | v1.13 | - |
livenessprobe v2.2.0 | release-2.2 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.2.0 | v1.13 | - |
livenessprobe v2.1.0 | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/livenessprobe:v2.1.0 | v1.13 | - |
livenessprobe v2.0.0 | release-2.0 | v1.0.0 | - | quay.io/k8scsi/livenessprobe:v2.0.0 | v1.13 | - |
livenessprobe v1.1.0 | release-1.1 | v1.0.0 | - | quay.io/k8scsi/livenessprobe:v1.1.0 | v1.13 | - |
Unsupported. | No 0.x branch. | v0.3.0 | v0.3.0 | quay.io/k8scsi/livenessprobe:v0.4.1 | v1.10 | v1.16 |
Description
The CSI livenessprobe
is a sidecar container that monitors the health of the CSI driver and reports it to Kubernetes via the Liveness Probe mechanism. This enables Kubernetes to automatically detect issues with the driver and restart the pod to try and fix the issue.
Usage
All CSI drivers should use the liveness probe to improve the availability of the driver while deployed on Kubernetes.
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/livenessprobe/blob/master/README.md.
Deployment
The CSI livenessprobe
is deployed as part of controller and node deployments. See deployment section for more details.
CSI node-driver-registrar
Status and Releases
Git Repository: https://github.com/kubernetes-csi/node-driver-registrar
Status: GA/Stable
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
node-driver-registrar v2.9.0 | release-2.8 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.0 | v1.13 | - | 1.25 |
node-driver-registrar v2.8.0 | release-2.8 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0 | v1.13 | - | |
node-driver-registrar v2.7.0 | release-2.7 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0 | v1.13 | - | |
node-driver-registrar v2.6.3 | release-2.6 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.3 | v1.13 | - |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version | Recommended K8s Version |
---|---|---|---|---|---|---|---|
node-driver-registrar v2.5.1 | release-2.5 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1 | v1.13 | - | |
node-driver-registrar v2.4.0 | release-2.4 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.4.0 | v1.13 | - | |
node-driver-registrar v2.3.0 | release-2.3 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.3.0 | v1.13 | - | |
node-driver-registrar v2.2.0 | release-2.2 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.2.0 | v1.13 | - | |
node-driver-registrar v2.1.0 | release-2.1 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.1.0 | v1.13 | - | |
node-driver-registrar v2.0.0 | release-2.0 | v1.0.0 | - | registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.0.0 | v1.13 | - | |
node-driver-registrar v1.2.0 | release-1.2 | v1.0.0 | - | quay.io/k8scsi/csi-node-driver-registrar:v1.2.0 | v1.13 | - | |
driver-registrar v0.4.2 | release-0.4 | v0.3.0 | v0.3.0 | quay.io/k8scsi/driver-registrar:v0.4.2 | v1.10 | v1.16 |
Description
The CSI `node-driver-registrar` is a sidecar container that fetches driver information (using `NodeGetInfo`) from a CSI endpoint and registers it with the kubelet on that node using the kubelet plugin registration mechanism.
Usage
Kubelet directly issues CSI `NodeGetInfo`, `NodeStageVolume`, and `NodePublishVolume` calls against CSI drivers. It uses the kubelet plugin registration mechanism to discover the unix domain socket to talk to the CSI driver. Therefore, all CSI drivers should use this sidecar container to register themselves with kubelet.
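The `NodeGetInfo` response that this sidecar relays to kubelet is small; a hedged sketch (hypothetical `nodeServer` type; the node ID field and topology key/value are placeholders):

```go
func (s *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		// NodeId is what the controller service later receives in ControllerPublishVolume.
		NodeId: s.nodeID,
		// AccessibleTopology is only needed by drivers that advertise
		// VOLUME_ACCESSIBILITY_CONSTRAINTS.
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{"topology.example.com/zone": "zone-1"},
		},
	}, nil
}
```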
For detailed information (binary parameters, etc.), see the README of the relevant branch.
Deployment
The CSI node-driver-registrar
is deployed per node. See deployment section for more details.
CSI cluster-driver-registrar
Deprecated
This sidecar container has not been updated since Kubernetes 1.13. As of Kubernetes 1.16, this sidecar container is officially deprecated.
The purpose of this sidecar container was to automatically register a CSIDriver object containing information about the driver with Kubernetes. Without this sidecar, developers and CSI driver vendors now have to add a CSIDriver object in their installation manifest or any tool that installs their CSI driver.
Please see CSIDriver for more information.
Status and Releases
Git Repository: https://github.com/kubernetes-csi/cluster-driver-registrar
Status: Alpha
Latest stable release | Branch | Compatible with CSI Version | Container Image | Min k8s Version | Max k8s version |
---|---|---|---|---|---|
cluster-driver-registrar v1.0.1 | release-1.0 | v1.0.0 | quay.io/k8scsi/csi-cluster-driver-registrar:v1.0.1 | v1.13 | - |
driver-registrar v0.4.2 | release-0.4 | v0.3.0 | quay.io/k8scsi/driver-registrar:v0.4.2 | v1.10 | - |
Description
The CSI cluster-driver-registrar
is a sidecar container that registers a CSI Driver with a Kubernetes cluster by creating a CSIDriver Object which enables the driver to customize how Kubernetes interacts with it.
Usage
CSI drivers that use one of the following Kubernetes features should use this sidecar container:
- Skip Attach
  - For drivers that don't support `ControllerPublishVolume`, this indicates to Kubernetes to skip the attach operation and eliminates the need to deploy the `external-attacher` sidecar.
- Pod Info on Mount
  - This causes Kubernetes to pass metadata such as Pod name and namespace to the `NodePublishVolume` call.
If you are not using one of these features, this sidecar container (and the creation of the CSIDriver Object) is not required. However, it is still recommended, because the CSIDriver Object makes it easier for users to discover the CSI drivers installed on their clusters.
For detailed information (binary parameters, etc.), see the README of the relevant branch.
Deployment
The CSI cluster-driver-registrar
is deployed as a controller. See deployment section for more details.
CSI external-health-monitor-controller
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-health-monitor
Status: Alpha
Supported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image |
---|---|---|---|---|
external-health-monitor-controller v0.10.0 | release-0.8 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.10.0 |
external-health-monitor-controller v0.9.0 | release-0.8 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.9.0 |
external-health-monitor-controller v0.8.0 | release-0.8 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.8.0 |
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image |
---|---|---|---|---|
external-health-monitor-controller v0.7.0 | release-0.7 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0 |
external-health-monitor-controller v0.6.0 | release-0.6 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.6.0 |
external-health-monitor-controller v0.4.0 | release-0.4 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.4.0 |
external-health-monitor-controller v0.3.0 | release-0.3 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.3.0 |
external-health-monitor-controller v0.2.0 | release-0.2 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.2.0 |
Description
The CSI `external-health-monitor-controller` is a sidecar container that is deployed together with the CSI controller driver, similar to how the CSI `external-provisioner` sidecar is deployed. It calls the CSI controller RPC `ListVolumes` or `ControllerGetVolume` to check the health condition of the CSI volumes and reports events on the `PersistentVolumeClaim` if the condition of a volume is abnormal.
The CSI `external-health-monitor-controller` also watches for node failure events. This component can be enabled by setting the `enable-node-watcher` flag to `true`. This currently only has an effect on local PVs. When a node failure event is detected, an event is reported on the PVC to indicate that pods using this PVC are on a failed node.
Usage
CSI drivers that support VOLUME_CONDITION
and LIST_VOLUMES
or VOLUME_CONDITION
and GET_VOLUME
controller capabilities should use this sidecar container.
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md.
Deployment
The CSI external-health-monitor-controller
is deployed as a controller. See https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md for more details.
CSI external-health-monitor-agent
Status and Releases
Git Repository: https://github.com/kubernetes-csi/external-health-monitor
Status: Deprecated
Unsupported Versions
Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image |
---|---|---|---|---|
external-health-monitor-agent v0.2.0 | release-0.2 | v1.3.0 | - | registry.k8s.io/sig-storage/csi-external-health-monitor-agent:v0.2.0 |
Description
Note: This sidecar has been deprecated and replaced with the CSIVolumeHealth feature in Kubernetes.
The CSI external-health-monitor-agent
is a sidecar container that is deployed together with the CSI node driver, similar to how the CSI node-driver-registrar
sidecar is deployed. It calls the CSI node RPC NodeGetVolumeStats
to check the health condition of the CSI volumes and report events on Pod
if the condition of a volume is abnormal
.
Usage
CSI drivers that support VOLUME_CONDITION
and NODE_GET_VOLUME_STATS
node capabilities should use this sidecar container.
For detailed information (binary parameters, RBAC rules, etc.), see https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md.
Deployment
The CSI external-health-monitor-agent
is deployed as a DaemonSet. See https://github.com/kubernetes-csi/external-health-monitor/blob/master/README.md for more details.
CSI objects
The Kubernetes API contains the following CSI-specific objects:
- CSIDriver Object
- CSINode Object
The schema definition for the objects can be found in the Kubernetes API reference.
CSIDriver Object
Status
- Kubernetes 1.12 - 1.13: Alpha
- Kubernetes 1.14: Beta
- Kubernetes 1.18: GA
What is the CSIDriver object?
The CSIDriver
Kubernetes API object serves two purposes:
- Simplify driver discovery
  - If a CSI driver creates a `CSIDriver` object, Kubernetes users can easily discover the CSI Drivers installed on their cluster (simply by issuing `kubectl get CSIDriver`).
- Customizing Kubernetes behavior
  - Kubernetes has a default set of behaviors when dealing with CSI Drivers (for example, it calls the `Attach`/`Detach` operations by default). This object allows CSI drivers to specify how Kubernetes should interact with it.
What fields does the CSIDriver
object have?
Here is an example of a v1 CSIDriver
object:
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: mycsidriver.example.com
spec:
attachRequired: true
podInfoOnMount: true
fsGroupPolicy: File # added in Kubernetes 1.19, this field is GA as of Kubernetes 1.23
volumeLifecycleModes: # added in Kubernetes 1.16, this field is beta
- Persistent
- Ephemeral
tokenRequests: # added in Kubernetes 1.20. See status at https://kubernetes-csi.github.io/docs/token-requests.html#status
- audience: "gcp"
- audience: "" # empty string means defaulting to the `--api-audiences` of kube-apiserver
expirationSeconds: 3600
requiresRepublish: true # added in Kubernetes 1.20. See status at https://kubernetes-csi.github.io/docs/token-requests.html#status
seLinuxMount: true # Added in Kubernetes 1.25.
These are the important fields:
name
- This should correspond to the full name of the CSI driver.
attachRequired
- Indicates this CSI volume driver requires an attach operation (because it implements the CSI `ControllerPublishVolume` method), and that Kubernetes should call attach and wait for any attach operation to complete before proceeding to mounting.
- If a `CSIDriver` object does not exist for a given CSI Driver, the default is `true` -- meaning attach will be called.
- If a `CSIDriver` object exists for a given CSI Driver, but this field is not specified, it also defaults to `true` -- meaning attach will be called.
- For more information see Skip Attach.
podInfoOnMount
- Indicates this CSI volume driver requires additional pod information (like pod name, pod UID, etc.) during mount operations.
- If value is not specified or `false`, pod information will not be passed on mount.
- If value is set to `true`, Kubelet will pass pod information as `volume_context` in CSI `NodePublishVolume` calls:
  - `"csi.storage.k8s.io/pod.name": pod.Name`
  - `"csi.storage.k8s.io/pod.namespace": pod.Namespace`
  - `"csi.storage.k8s.io/pod.uid": string(pod.UID)`
  - `"csi.storage.k8s.io/serviceAccount.name": pod.Spec.ServiceAccountName`
- For more information see Pod Info on Mount.
fsGroupPolicy
- This field was added in Kubernetes 1.19 and cannot be set when using an older Kubernetes release.
- This field is beta in Kubernetes 1.20 and GA in Kubernetes 1.23.
- Controls if this CSI volume driver supports volume ownership and permission changes when volumes are mounted.
- The following modes are supported, and if not specified the default is `ReadWriteOnceWithFSType`:
  - `None`: Indicates that volumes will be mounted with no modifications, as the CSI volume driver does not support these operations.
  - `File`: Indicates that the CSI volume driver supports volume ownership and permission change via fsGroup, and Kubernetes may use fsGroup to change permissions and ownership of the volume to match user requested fsGroup in the pod's SecurityPolicy regardless of fstype or access mode.
  - `ReadWriteOnceWithFSType`: Indicates that volumes will be examined to determine if volume ownership and permissions should be modified to match the pod's security policy. Changes will only occur if the `fsType` is defined and the persistent volume's `accessModes` contains `ReadWriteOnce`. This is the default behavior if no other FSGroupPolicy is defined.
- For more information see CSI Driver fsGroup Support.
volumeLifecycleModes
- This field was added in Kubernetes 1.16 and cannot be set when using an older Kubernetes release.
- This field is beta.
- It informs Kubernetes about the volume modes that are supported by the driver. This ensures that the driver is not used incorrectly by users. The default is `Persistent`, which is the normal PVC/PV mechanism. `Ephemeral` enables inline ephemeral volumes in addition (when both are listed) or instead of normal volumes (when it is the only entry in the list).
tokenRequests
- This field was added in Kubernetes 1.20 and cannot be set when using an older Kubernetes release.
- This field is enabled by default in Kubernetes 1.21 and cannot be disabled since 1.22.
- If this field is specified, Kubelet will plumb down the bound service account tokens of the pod as `volume_context` in the `NodePublishVolume` call:
  - `"csi.storage.k8s.io/serviceAccount.tokens": {"gcp":{"token":"<token>","expirationTimestamp":"<expiration timestamp in RFC3339>"}}`
- If the CSI driver doesn't find a token recorded in the `volume_context`, it should return an error in `NodePublishVolume` to inform Kubelet to retry.
- Audiences should be distinct, otherwise the validation will fail. If the audience is "", it means the issued token has the same audience as kube-apiserver.
requiresRepublish
- This field was added in Kubernetes 1.20 and cannot be set when using an older Kubernetes release.
- This field is enabled by default in Kubernetes 1.21 and cannot be disabled since 1.22.
- If this field is `true`, Kubelet will periodically call `NodePublishVolume`. This is useful in the following scenarios:
  - If the volume mounted by the CSI driver is short-lived.
  - If the CSI driver requires valid service account tokens (enabled by the field `tokenRequests`) repeatedly.
- CSI drivers should only atomically update the contents of the volume. A mount point change will not be seen by a running container.
seLinuxMount
- This field is alpha in Kubernetes 1.25. It must be explicitly enabled by setting the feature gates `ReadWriteOncePod` and `SELinuxMountReadWriteOncePod`.
- The default value of this field is `false`.
- When set to `true`, the corresponding CSI driver announces that all its volumes are independent volumes from the Linux kernel's point of view and each of them can be mounted with a different SELinux label mount option (`-o context=<SELinux label>`). Examples:
  - A CSI driver that creates block devices formatted with a filesystem, such as `xfs` or `ext4`, can set `seLinuxMount: true`, because each volume has its own block device.
  - A CSI driver whose volumes are always separate exports on an NFS server can set `seLinuxMount: true`, because each volume has its own NFS export and thus the Linux kernel treats them as independent volumes.
  - A CSI driver that can provide two volumes as subdirectories of a common NFS export must set `seLinuxMount: false`, because these two volumes are treated as a single volume by the Linux kernel and must share the same `-o context=<SELinux label>` option.
- See the corresponding KEP for details.
- Always test Pods with various SELinux contexts and various volume configurations before setting this field to `true`!
What creates the CSIDriver object?
To install, a CSI driver's deployment manifest must contain a CSIDriver
object as shown in the example above.
NOTE: The cluster-driver-registrar side-car which was used to create CSIDriver objects in Kubernetes 1.13 has been deprecated for Kubernetes 1.16. No cluster-driver-registrar has been released for Kubernetes 1.14 and later.
A `CSIDriver` instance should exist for the whole lifetime of all pods that use volumes provided by the corresponding CSI driver, so that the Skip Attach and Pod Info on Mount features work correctly.
Listing registered CSI drivers
Using the CSIDriver
object, it is now possible to query Kubernetes to get a list of registered drivers running in the cluster as shown below:
$> kubectl get csidrivers.storage.k8s.io
NAME ATTACHREQUIRED PODINFOONMOUNT MODES AGE
mycsidriver.example.com true true Persistent,Ephemeral 2m46s
Or get a more detailed view of your registered driver with:
$> kubectl describe csidrivers.storage.k8s.io
Name: mycsidriver.example.com
Namespace:
Labels: <none>
Annotations: <none>
API Version: storage.k8s.io/v1
Kind: CSIDriver
Metadata:
Creation Timestamp: 2022-04-07T05:58:06Z
Managed Fields:
API Version: storage.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
f:attachRequired:
f:fsGroupPolicy:
f:podInfoOnMount:
f:requiresRepublish:
f:tokenRequests:
f:volumeLifecycleModes:
.:
v:"Ephemeral":
v:"Persistent":
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-04-07T05:58:06Z
Resource Version: 896
UID: 6cc7d513-6d72-4203-87d3-730f83884f89
Spec:
Attach Required: true
Fs Group Policy: File
Pod Info On Mount: true
Volume Lifecycle Modes:
Persistent
Ephemeral
Events: <none>
Changes from Alpha to Beta
CRD to Built in Type
During alpha development, the CSIDriver
object was also defined as a Custom Resource Definition (CRD). As part of the promotion to beta the object has been moved to the built-in Kubernetes API.
In the move from alpha to beta, the API Group for this object changed from csi.storage.k8s.io/v1alpha1
to storage.k8s.io/v1beta1
.
There is no automatic update of existing CRDs and their CRs during the Kubernetes update to the new built-in type.
Enabling CSIDriver on Kubernetes
In Kubernetes v1.12 and v1.13, because the feature was alpha, it was disabled by default. To enable the use of CSIDriver
on these versions, do the following:
- Ensure the feature gate is enabled via the following Kubernetes feature flag:
--feature-gates=CSIDriverRegistry=true
- Either ensure the
CSIDriver
CRD is automatically installed via the Kubernetes Storage CRD addon OR manually install theCSIDriver
CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/manifests/csidriver.yaml
Kubernetes v1.14+ uses the same Kubernetes feature flag, but because the feature is beta, it is enabled by default. And since the API type (as of beta) is built in to the Kubernetes API, installation of the CRD is no longer required.
CSINode Object
Status
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.12 | 1.13 |
Beta | 1.14 | 1.16 |
GA | 1.17 | - |
What is the CSINode object?
CSI drivers generate node specific information. Instead of storing this in the Kubernetes Node
API Object, a new CSI specific Kubernetes CSINode
object was created.
It serves the following purposes:
- Mapping Kubernetes node name to CSI Node name,
- The CSI
GetNodeInfo
call returns the name by which the storage system refers to a node. Kubernetes must use this name in futureControllerPublishVolume
calls. Therefore, when a new CSI driver is registered, Kubernetes stores the storage system node ID in theCSINode
object for future reference.
- Driver availability
- A way for kubelet to communicate to the kube-controller-manager and kubernetes scheduler whether the driver is available (registered) on the node or not.
- Volume topology
- The CSI
GetNodeInfo
call returns a set of keys/values labels identifying the topology of that node. Kubernetes uses this information to do topology-aware provisioning (see PVC Volume Binding Modes for more details). It stores the key/values as labels on the Kubernetes node object. In order to recall whichNode
label keys belong to a specific CSI driver, the kubelet stores the keys in theCSINode
object for future reference.
What fields does the CSINode object have?
Here is an example of a v1 CSINode
object:
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
name: node1
spec:
drivers:
- name: mycsidriver.example.com
nodeID: storageNodeID1
topologyKeys: ['mycsidriver.example.com/regions', "mycsidriver.example.com/zones"]
What the fields mean:
drivers
- list of CSI drivers running on the node and their properties.
name
- the CSI driver that this object refers to.
nodeID
- the assigned identifier for the node as determined by the driver.
topologyKeys
- a list of topology keys assigned to the node as supported by the driver.
What creates the CSINode object?
CSI drivers do not need to create the CSINode
object directly. Kubelet manages the object when a CSI driver registers through the kubelet plugin registration mechanism. The node-driver-registrar sidecar container helps with this registration.
Changes from Alpha to Beta
CRD to Built in Type
The alpha object was called CSINodeInfo
, whereas the beta object is called
CSINode
. The alpha CSINodeInfo
object was also defined as a Custom Resource Definition (CRD). As part of the promotion to beta the object has been moved to the built-in Kubernetes API.
In the move from alpha to beta, the API Group for this object changed from csi.storage.k8s.io/v1alpha1
to storage.k8s.io/v1beta1
.
There is no automatic update of existing CRDs and their CRs during the Kubernetes update to the new built-in type.
Enabling CSINodeInfo on Kubernetes
In Kubernetes v1.12 and v1.13, because the feature was alpha, it was disabled by default. To enable the use of CSINodeInfo
on these versions, do the following:
- Ensure the feature gate is enabled with
--feature-gates=CSINodeInfo=true
- Either ensure the `CSINodeInfo` CRD is automatically installed via the Kubernetes Storage CRD addon OR manually install the `CSINodeInfo` CRD on the Kubernetes cluster with the following command:
$> kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/manifests/csinodeinfo.yaml
Kubernetes v1.14+ uses the same Kubernetes feature flag, but because the feature is beta, it is enabled by default. And since the API type (as of beta) is built in to the Kubernetes API, installation of the CRD is no longer required.
Features
The Kubernetes implementation of CSI has multiple sub-features. This section describes these sub-features, their status (although support for CSI in Kubernetes is GA/stable, support of sub-features moves independently, so sub-features may be alpha or beta), and how to integrate them into your CSI Driver.
Secrets and Credentials
Some drivers may require a secret in order to complete operations.
CSI Driver Secrets
If a CSI Driver requires secrets for a backend (a service account, for example), and this secret is required at the "per driver" granularity (not different "per CSI operation" or "per volume"), then the secret SHOULD be injected directly into CSI driver pods via standard Kubernetes secret distribution mechanisms during deployment.
CSI Operation Secrets
If a CSI Driver requires secrets "per CSI operation" or "per volume" or "per storage pool", the CSI spec allows secrets to be passed in for various CSI operations (including CreateVolumeRequest
, ControllerPublishVolumeRequest
, and more).
Cluster admins can populate such secrets by creating Kubernetes Secret
objects and specifying the keys in the StorageClass
or SnapshotClass
objects.
The CSI sidecar containers facilitate the handling of secrets between Kubernetes and the CSI Driver. For more details see the StorageClass Secrets and VolumeSnapshotClass Secrets sections below.
Secret RBAC Rules
To reduce RBAC permissions as much as possible, secret rules are disabled in each sidecar repository by default.
Please add or update the RBAC rules if secrets are expected to be used.
To set the proper secret permissions, uncomment the related lines defined in `rbac.yaml` (e.g. external-provisioner/deploy/kubernetes/rbac.yaml).
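As an illustrative sketch (the actual rules to uncomment live in each sidecar's own `rbac.yaml`, and the role name below is hypothetical), granting a sidecar's ClusterRole read access to Secrets looks roughly like this:
# Hypothetical ClusterRole excerpt; prefer uncommenting the rules shipped in the sidecar's rbac.yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csi-sidecar-secret-access
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]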
Handling Sensitive Information
CSI Drivers that accept secrets SHOULD handle this data carefully. It may contain sensitive information and MUST be treated as such (e.g. not logged).
To make it easier to handle secret fields (e.g. strip them from CSI protos when logging), the CSI spec defines a decorator (csi_secret
) on all fields containing sensitive information. Any fields decorated with csi_secret
MUST be treated as if they contain sensitive information (e.g. not logged, etc.).
The Kubernetes CSI development team also provides a Go package called `protosanitizer` that CSI driver developers may use to remove the values of all fields in a gRPC message decorated with `csi_secret`. The library can be found in kubernetes-csi/csi-lib-utils/protosanitizer. The Kubernetes CSI Sidecar Containers and sample drivers use this library to ensure no sensitive information is logged.
StorageClass Secrets
The CSI external-provisioner sidecar container facilitates the handling of secrets for the following operations:
CreateVolumeRequest
DeleteVolumeRequest
ControllerPublishVolumeRequest
ControllerUnpublishVolumeRequest
ControllerExpandVolumeRequest
NodeStageVolumeRequest
NodePublishVolumeRequest
CSI external-provisioner
v1.0.1+ supports the following keys in StorageClass.parameters
:
csi.storage.k8s.io/provisioner-secret-name
csi.storage.k8s.io/provisioner-secret-namespace
csi.storage.k8s.io/controller-publish-secret-name
csi.storage.k8s.io/controller-publish-secret-namespace
csi.storage.k8s.io/node-stage-secret-name
csi.storage.k8s.io/node-stage-secret-namespace
csi.storage.k8s.io/node-publish-secret-name
csi.storage.k8s.io/node-publish-secret-namespace
CSI external-provisioner
v1.2.0+ adds support for the following keys in StorageClass.parameters
:
csi.storage.k8s.io/controller-expand-secret-name
csi.storage.k8s.io/controller-expand-secret-namespace
Cluster admins can populate the secret fields for the operations listed above with data from Kubernetes Secret
objects by specifying these keys in the StorageClass
object.
Examples
Basic Provisioning Secret
In this example, the external-provisioner will fetch Kubernetes Secret
object fast-storage-provision-key
in the namespace pd-ssd-credentials
and pass the credentials to the CSI driver named csi-driver.team.example.com
in the CreateVolume
CSI call.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
type: pd-ssd
csi.storage.k8s.io/provisioner-secret-name: fast-storage-provision-key
csi.storage.k8s.io/provisioner-secret-namespace: pd-ssd-credentials
All volumes provisioned using this StorageClass
use the same secret.
Per Volume Secrets
In this example, the external-provisioner will generate the name of the Kubernetes Secret
object and namespace for the NodePublishVolume
CSI call, based on the PVC namespace and annotations, at volume provision time.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
type: pd-ssd
csi.storage.k8s.io/node-publish-secret-name: ${pvc.annotations['team.example.com/key']}
csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}
This StorageClass will result in the creation of a PersistentVolume
API object referencing a "node publish secret" in the same namespace as the PersistentVolumeClaim
that triggered the provisioning and with a name specified as an annotation on the PersistentVolumeClaim
. This could be used to give the creator of the PersistentVolumeClaim
the ability to specify a secret containing a decryption key they have control over.
Multiple Operation Secrets
A driver may support secret keys for multiple operations. In this case, you can provide secret references for each operation:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast-storage-all
provisioner: csi-driver.team.example.com
parameters:
type: pd-ssd
csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}
csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}-fast-storage
csi.storage.k8s.io/node-publish-secret-name: ${pvc.name}-${pvc.annotations['team.example.com/key']}
csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}-fast-storage
Operations
Details for each secret supported by the external-provisioner can be found below.
Create/Delete Volume Secret
The CSI external-provisioner
(v1.0.1+) looks for the following keys in StorageClass.parameters
.
csi.storage.k8s.io/provisioner-secret-name
csi.storage.k8s.io/provisioner-secret-namespace
The values of both of these parameters, together, refer to the name and namespace of a Secret
object in the Kubernetes API.
If specified, the CSI external-provisioner
will attempt to fetch the secret before provisioning and deletion.
If the secret is retrieved successfully, the provisioner passes it to the CSI driver in the CreateVolumeRequest.secrets
or DeleteVolumeRequest.secrets
field.
If no such secret exists in the Kubernetes API, or the provisioner is unable to fetch it, the provision operation will fail.
Note, however, that the delete operation will continue even if the secret is not found (because, for example, the entire namespace containing the secret was deleted). In this case, if the driver requires a secret for deletion, then the volume and PV may need to be manually cleaned up.
The values of these parameters may be "templates". The external-provisioner
will automatically resolve templates at volume provision time, as detailed below:
csi.storage.k8s.io/provisioner-secret-name
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
  - Support added in CSI external-provisioner v1.2.0+
- `${pvc.name}`
  - Replaced with the name of the `PersistentVolumeClaim` object that triggered provisioning.
  - Support added in CSI external-provisioner v1.2.0+
csi.storage.k8s.io/provisioner-secret-namespace
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
Controller Publish/Unpublish Secret
The CSI external-provisioner
(v1.0.1+) looks for the following keys in StorageClass.parameters
:
csi.storage.k8s.io/controller-publish-secret-name
csi.storage.k8s.io/controller-publish-secret-namespace
The values of both of these parameters, together, refer to the name and namespace of a Secret
object in the Kubernetes API.
If specified, the CSI external-provisioner
sets the CSIPersistentVolumeSource.ControllerPublishSecretRef
field in the new PersistentVolume
object to refer to this secret once provisioning is successful.
The CSI external-attacher
then attempts to fetch the secret referenced by the CSIPersistentVolumeSource.ControllerPublishSecretRef
, if specified, before an attach or detach operation.
If no such secret exists in the Kubernetes API, or the external-attacher
is unable to fetch it, the attach or detach operation fails.
If the secret is retrieved successfully, the external-attacher
passes it to the CSI driver in the ControllerPublishVolumeRequest.secrets
or ControllerUnpublishVolumeRequest.secrets
field.
The values of these parameters may be "templates". The external-provisioner
will automatically resolve templates at volume provision time, as detailed below:
csi.storage.k8s.io/controller-publish-secret-name
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.name}`
  - Replaced with the name of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.annotations['<ANNOTATION_KEY>']}` (e.g. `${pvc.annotations['example.com/key']}`)
  - Replaced with the value of the specified annotation from the `PersistentVolumeClaim` object that triggered provisioning.
csi.storage.k8s.io/controller-publish-secret-namespace
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
Node Stage Secret
The CSI external-provisioner
(v1.0.1+) looks for the following keys in StorageClass.parameters
:
csi.storage.k8s.io/node-stage-secret-name
csi.storage.k8s.io/node-stage-secret-namespace
The values of both parameters, together, refer to the name and namespace of the Secret
object in the Kubernetes API.
If specified, the CSI external-provisioner
sets the CSIPersistentVolumeSource.NodeStageSecretRef
field in the new PersistentVolume
object to refer to this secret once provisioning is successful.
The Kubernetes kubelet then attempts to fetch the secret referenced by the CSIPersistentVolumeSource.NodeStageSecretRef
field, if specified, before a mount device operation.
If no such secret exists in the Kubernetes API, or the kubelet is unable to fetch it, the mount device operation fails.
If the secret is retrieved successfully, the kubelet passes it to the CSI driver in the NodeStageVolumeRequest.secrets
field.
The values of these parameters may be "templates". The external-provisioner
will automatically resolve templates at volume provision time, as detailed below:
csi.storage.k8s.io/node-stage-secret-name
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.name}`
  - Replaced with the name of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.annotations['<ANNOTATION_KEY>']}` (e.g. `${pvc.annotations['example.com/key']}`)
  - Replaced with the value of the specified annotation from the `PersistentVolumeClaim` object that triggered provisioning.
csi.storage.k8s.io/node-stage-secret-namespace
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
Node Publish Secret
The CSI external-provisioner
(v1.0.1+) looks for the following keys in StorageClass.parameters
:
csi.storage.k8s.io/node-publish-secret-name
csi.storage.k8s.io/node-publish-secret-namespace
The values of both parameters, together, refer to the name and namespace of the Secret
object in the Kubernetes API.
If specified, the CSI external-provisioner
sets the CSIPersistentVolumeSource.NodePublishSecretRef
field in the new PersistentVolume
object to refer to this secret once provisioning is successful.
The Kubernetes kubelet attempts to fetch the secret referenced by the CSIPersistentVolumeSource.NodePublishSecretRef
field, if specified, before a mount operation.
If no such secret exists in the Kubernetes API, or the kubelet is unable to fetch it, the mount operation fails.
If the secret is retrieved successfully, the kubelet passes it to the CSI driver in the NodePublishVolumeRequest.secrets
field.
The values of these parameters may be "templates". The external-provisioner
will automatically resolve templates at volume provision time, as detailed below:
csi.storage.k8s.io/node-publish-secret-name
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.name}`
  - Replaced with the name of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.annotations['<ANNOTATION_KEY>']}` (e.g. `${pvc.annotations['example.com/key']}`)
  - Replaced with the value of the specified annotation from the `PersistentVolumeClaim` object that triggered provisioning.
csi.storage.k8s.io/node-publish-secret-namespace
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
Controller Expand (Volume Resize) Secret
The CSI external-provisioner
(v1.2.0+) looks for the following keys in StorageClass.parameters
:
csi.storage.k8s.io/controller-expand-secret-name
csi.storage.k8s.io/controller-expand-secret-namespace
The values of both parameters, together, refer to the name and namespace of the Secret
object in the Kubernetes API.
If specified, the CSI external-provisioner
sets the CSIPersistentVolumeSource.ControllerExpandSecretRef
field in the new PersistentVolume
object to refer to this secret once provisioning is successful.
The external-resizer
(v0.2.0+) attempts to fetch the secret referenced by the CSIPersistentVolumeSource.ControllerExpandSecretRef
field, if specified, before starting a volume resize (expand) operation.
If no such secret exists in the Kubernetes API, or the external-resizer
is unable to fetch it, the resize (expand) operation fails.
If the secret is retrieved successfully, the external-resizer
passes it to the CSI driver in the ControllerExpandVolumeRequest.secrets
field.
The values of these parameters may be "templates". The external-provisioner
will automatically resolve templates at volume provision time, as detailed below:
csi.storage.k8s.io/controller-expand-secret-name
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.name}`
  - Replaced with the name of the `PersistentVolumeClaim` object that triggered provisioning.
- `${pvc.annotations['<ANNOTATION_KEY>']}` (e.g. `${pvc.annotations['example.com/key']}`)
  - Replaced with the value of the specified annotation from the `PersistentVolumeClaim` object that triggered provisioning.
csi.storage.k8s.io/controller-expand-secret-namespace
- `${pv.name}`
  - Replaced with name of the `PersistentVolume` object being provisioned.
- `${pvc.namespace}`
  - Replaced with namespace of the `PersistentVolumeClaim` object that triggered provisioning.
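For example, a StorageClass that supplies a fixed secret for volume expansion (the driver, class, secret, and namespace names here are placeholders, following the earlier examples) might look like:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage-expandable
provisioner: csi-driver.team.example.com
allowVolumeExpansion: true
parameters:
  type: pd-ssd
  csi.storage.k8s.io/controller-expand-secret-name: expand-key
  csi.storage.k8s.io/controller-expand-secret-namespace: pd-ssd-credentials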
VolumeSnapshotClass Secrets
The CSI external-snapshotter sidecar container facilitates the handling of secrets for the following operations:
CreateSnapshotRequest
DeleteSnapshotRequest
CSI external-snapshotter
v1.0.1+ supports the following keys in VolumeSnapshotClass.parameters
:
csi.storage.k8s.io/snapshotter-secret-name
csi.storage.k8s.io/snapshotter-secret-namespace
Cluster admins can populate the secret fields for the operations listed above with data from Kubernetes Secret
objects by specifying these keys in the VolumeSnapshotClass
object.
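For example, a VolumeSnapshotClass that points the snapshotter at a fixed secret (a sketch using the v1 snapshot API; the driver, class, secret, and namespace names are placeholders) might look like:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: fast-snapshot-class
driver: csi-driver.team.example.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: snapshot-key
  csi.storage.k8s.io/snapshotter-secret-namespace: pd-ssd-credentials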
Operations
Details for each secret supported by the external-snapshotter can be found below.
Create/Delete VolumeSnapshot Secret
CSI external-snapshotter
v1.0.1+ looks for the following keys in VolumeSnapshotClass.parameters
:
csi.storage.k8s.io/snapshotter-secret-name
csi.storage.k8s.io/snapshotter-secret-namespace
The values of both of these parameters, together, refer to the name and namespace of a Secret
object in the Kubernetes API.
If specified, the CSI external-snapshotter
will attempt to fetch the secret before creation and deletion.
If the secret is retrieved successfully, the snapshotter passes it to the CSI driver in the CreateSnapshotRequest.secrets
or DeleteSnapshotRequest.secrets
field.
If no such secret exists in the Kubernetes API, or the snapshotter is unable to fetch it, the create operation will fail.
Note, however, that the delete operation will continue even if the secret is not found (because, for example, the entire namespace containing the secret was deleted). In this case, if the driver requires a secret for deletion, then the volume and PV may need to be manually cleaned up.
The values of these parameters may be "templates". The external-snapshotter
will automatically resolve templates at snapshot create time, as detailed below:
csi.storage.k8s.io/snapshotter-secret-name
- `${volumesnapshotcontent.name}`
  - Replaced with name of the `VolumeSnapshotContent` object being created.
- `${volumesnapshot.namespace}`
  - Replaced with namespace of the `VolumeSnapshot` object that triggered creation.
- `${volumesnapshot.name}`
  - Replaced with the name of the `VolumeSnapshot` object that triggered creation.
csi.storage.k8s.io/snapshotter-secret-namespace
- `${volumesnapshotcontent.name}`
  - Replaced with name of the `VolumeSnapshotContent` object being created.
- `${volumesnapshot.namespace}`
  - Replaced with namespace of the `VolumeSnapshot` object that triggered creation.
CSI Topology Feature
Status
Status | Min K8s Version | Max K8s Version | external-provisioner Version |
---|---|---|---|
Alpha | 1.12 | 1.12 | 0.4 |
Alpha | 1.13 | 1.13 | 1.0 |
Beta | 1.14 | 1.16 | 1.1-1.4 |
GA | 1.17 | - | 1.5+ |
Overview
Some storage systems expose volumes that are not equally accessible by all nodes in a Kubernetes cluster. Instead volumes may be constrained to some subset of node(s) in the cluster. The cluster may be segmented into, for example, “racks” or “regions” and “zones” or some other grouping, and a given volume may be accessible only from one of those groups.
To enable orchestration systems, like Kubernetes, to work well with storage systems which expose volumes that are not equally accessible by all nodes, the CSI spec enables:
- Ability for a CSI Driver to opaquely specify where a particular node exists (e.g. "node A" is in "zone 1").
- Ability for Kubernetes (users or components) to influence where a volume is provisioned (e.g. provision new volume in either "zone 1" or "zone 2").
- Ability for a CSI Driver to opaquely specify where a particular volume exists (e.g. "volume X" is accessible by all nodes in "zone 1" and "zone 2").
Kubernetes and the external-provisioner use these abilities to make intelligent scheduling and provisioning decisions: Kubernetes can both influence where a volume is provisioned and act on the topology information for each volume.
Implementing Topology in your CSI Driver
To support topology in a CSI driver, the following must be implemented:
- The
PluginCapability
must supportVOLUME_ACCESSIBILITY_CONSTRAINTS
. - The plugin must fill in
accessible_topology
inNodeGetInfoResponse
. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object. - During
CreateVolume
, the topology information will get passed in throughCreateVolumeRequest.accessibility_requirements
.
In the StorageClass object, both volumeBindingMode
values of Immediate
and
WaitForFirstConsumer
are supported.
- If `Immediate` is set, then the external-provisioner will pass in all available topologies in the cluster for the driver.
- If `WaitForFirstConsumer` is set, then the external-provisioner will wait for the scheduler to pick a node. The topology of that selected node will then be set as the first entry in `CreateVolumeRequest.accessibility_requirements.preferred`. All remaining topologies are still included in the `requisite` and `preferred` fields to support storage systems that span across multiple topologies.
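For example, a topology-aware StorageClass that delays provisioning until a pod is scheduled, and optionally restricts provisioning to certain zones reported by the driver, might look like the following sketch (the driver name and topology key are placeholders; the key must match a topology key the driver reports in NodeGetInfoResponse):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-storage
provisioner: csi-driver.team.example.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: mycsidriver.example.com/zones   # topology key reported by the driver
        values:
          - zone-1
          - zone-2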
Sidecar Deployment
The topology feature requires the external-provisioner sidecar with the Topology feature gate enabled:
--feature-gates=Topology=true
Kubernetes Cluster Setup
Beta
In the Kubernetes cluster the CSINodeInfo
feature must be enabled on both Kubernetes master and nodes (refer to the CSINode Object section for more info):
--feature-gates=CSINodeInfo=true
In order to function properly, all Kubernetes masters and nodes must be on at least
Kubernetes 1.14. If a selected node is on a lower version, topology is ignored and not
passed to the driver during CreateVolume
.
Alpha
The alpha feature in the external-provisioner is not compatible across Kubernetes versions. In addition, Kubernetes master and node version skew and upgrades are not supported.
The CSINodeInfo
, VolumeScheduling
, and KubeletPluginsWatcher
feature gates
must be enabled on both Kubernetes master and nodes.
The CSINodeInfo CRDs also have to be manually installed in the cluster.
Storage Internal Topology
Note that a storage system may also have an "internal topology" different from (independent of) the topology of the cluster where workloads are scheduled. Meaning volumes exposed by the storage system are equally accessible by all nodes in the Kubernetes cluster, but the storage system has some internal topology that may influence, for example, the performance of a volume from a given node.
CSI does not currently expose a first class mechanism to influence such storage system internal topology on provisioning. Therefore, Kubernetes can not programmatically influence such topology. However, a CSI Driver may expose the ability to specify internal storage topology during volume provisioning using an opaque parameter in the CreateVolume
CSI call (CSI enables CSI Drivers to expose an arbitrary set of configuration options during dynamic provisioning by allowing opaque parameters to be passed from cluster admins to the storage plugins) -- this would enable cluster admins to be able to control the storage system internal topology during provisioning.
Raw Block Volume Feature
Status
Status | Min K8s Version | Max K8s Version | external-provisioner Version | external-attacher Version |
---|---|---|---|---|
Alpha | 1.11 | 1.13 | 0.4 | 0.4 |
Alpha | 1.13 | 1.13 | 1.0 | 1.0 |
Beta | 1.14 | 1.17 | 1.1+ | 1.1+ |
GA | 1.18 | - | 1.1+ | 1.1+ |
Overview
This page documents how to implement raw block volume support in a CSI Driver.
A block volume is a volume that will appear as a block device inside the container. A mounted (file) volume is a volume that will be mounted using a specified file system and appear as a directory inside the container.
The CSI spec supports both block and mounted (file) volumes.
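For example, on the Kubernetes side a user requests a raw block volume by setting `volumeMode: Block` on the PVC and consuming it through `volumeDevices` in the pod. The names and the StorageClass below are placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda   # device file exposed inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc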
Implementing Raw Block Volume Support in Your CSI Driver
CSI doesn't provide a capability query for block volumes, so COs will simply pass through requests for
block volume creation to CSI plugins, and plugins are allowed to fail with the InvalidArgument
gRPC
error code if they don't support block volumes. Kubernetes doesn't make any assumptions about which CSI
plugins support blocks and which don't, so users have to know if any given storage class is capable of
creating block volumes.
The difference between a request for a mounted (file) volume and a block volume is the VolumeCapabilities
field of the request. Note that this field is an array and the created volume must support ALL of the
capabilities requested, or else return an error. If the AccessType
method of a VolumeCapability
is
VolumeCapability_Block
, then the capability is requesting a raw block volume. Unlike mount volumes, block
volumes don't have any specific capabilities that need to be validated, although access modes still
apply.
Block volumes are much more likely to support multi-node flavors of VolumeCapability_AccessMode_Mode
than mount volumes, because there's no file system state stored on the node side that creates any technical
impediments to multi-attaching block volumes. While there may still be good reasons to prevent
multi-attaching block volumes, and there may be implementations that are not capable of supporting
multi-attach, you should think carefully about what makes sense for your driver.
CSI plugins that support both mount and block volumes must be sure to check the capabilities of all CSI RPC requests and ensure that the capability of the request matches the capability of the volume, to avoid trying to do file-system-related things to block volumes and block-related things to file system volumes. The following RPCs specify capabilities that must be validated:
CreateVolume()
(multiple capabilities)ControllerPublishVolume()
ValidateVolumeCapabilities()
(multiple capabilities)GetCapacity()
(see below)NodeStageVolume()
NodePublishVolume()
Also, CSI plugins that implement the optional GetCapacity()
RPC should note that that RPC includes
capabilities too, and if the capacity for mount volumes is not the same as the capacity for block
volumes, that needs to be handled in the implementation of that RPC.
Q: Can CSI plugins support only block volumes and not mount volumes?
A: Yes! This is just the reverse case of supporting mount volumes only. Plugins may return InvalidArgument
for any creation request with an AccessType
of VolumeCapability_Mount
.
Differences Between Block and Mount Volumes
The main difference between block volumes and mount volumes is the expected result of the NodePublish()
.
For mount volumes, the CO expects the result to be a mounted directory, at TargetPath
. For block volumes,
the CO expects there to be a device file at TargetPath
. The device file can be a bind-mounted device from
the hosts /dev
file system, or it can be a device node created at that location using mknod()
.
It's desirable but not required to expose an unfiltered device node. For example, CSI plugins based on technologies that implement SCSI protocols should expect that pods consuming the block volumes they create may want to send SCSI commands to the device. This is something that should "just work" by default (subject to container capabilities) so CSI plugins should avoid anything that would break this kind of use case. The only hard requirement, however, is that the device implements block reading/writing.
For plugins with the RPC_STAGE_UNSTAGE_VOLUME
capability, the CO doesn't care exactly what is placed at
the StagingTargetPath
, but it's worth noting that some CSI RPCs are allowed to pass the plugin either
a staging path or a publish path, so it's important to think carefully about how NodeStageVolume()
is
implemented, knowing that either path could get used by the CO to refer to the volume later on. This is
made more challenging because the CSI spec says that StagingTargetPath
is always a directory even for
block volumes.
Sidecar Deployment
The raw block feature requires the external-provisioner and external-attacher sidecars to be deployed.
Kubernetes Cluster Setup
The BlockVolume
and CSIBlockVolume
feature gates need to be enabled on
all Kubernetes masters and nodes.
--feature-gates=BlockVolume=true,CSIBlockVolume=true...
- TODO: detail how Kubernetes API raw block fields get mapped to CSI methods/fields.
Skip Kubernetes Attach and Detach
Status
Status | Min K8s Version | Max K8s Version | cluster-driver-registrar Version |
---|---|---|---|
Alpha | 1.12 | 1.12 | 0.4 |
Alpha | 1.13 | 1.13 | 1.0 |
Beta | 1.14 | 1.17 | n/a |
GA | 1.18 | - | n/a |
Overview
Volume drivers, like NFS, for example, have no concept of an attach (ControllerPublishVolume
). However, Kubernetes always executes Attach
and Detach
operations even if the CSI driver does not implement an attach operation (i.e. even if the CSI Driver does not implement a ControllerPublishVolume
call).
This was problematic because it meant all CSI drivers had to handle Kubernetes attachment. CSI Drivers that did not implement the PUBLISH_UNPUBLISH_VOLUME
controller capability could work around this by deploying an external-attacher and the external-attacher
would respond to Kubernetes attach operations and simply do a no-op (because the CSI driver did not advertise the PUBLISH_UNPUBLISH_VOLUME
controller capability).
Although the workaround works, it adds an unnecessary operation (round-trip) in the preparation of a volume for a container, and requires CSI Drivers to deploy an unnecessary sidecar container (external-attacher
).
Skip Attach with CSI Driver Object
The CSIDriver Object enables CSI Drivers to specify how Kubernetes should interact with it.
Specifically, setting the attachRequired field to false instructs Kubernetes to skip any attach operation altogether.
For example, the existence of the following object would cause Kubernetes to skip attach operations for all CSI Driver testcsidriver.example.com
volumes.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: testcsidriver.example.com
spec:
attachRequired: false
The CSIDriver object should be manually included in the driver deployment manifests.
Previously, the cluster-driver-registrar sidecar container could be deployed to automatically create the object. Once the flags to this container are configured correctly, it will automatically create a CSIDriver Object when it starts with the correct fields set.
Alpha Functionality
In alpha, this feature was enabled via the CSIDriver Object CRD.
apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
....
Pod Info on Mount
Status
Status | Min K8s Version | Max K8s Version | cluster-driver-registrar Version |
---|---|---|---|
Alpha | 1.12 | 1.12 | 0.4 |
Alpha | 1.13 | 1.13 | 1.0 |
Beta | 1.14 | 1.17 | n/a |
GA | 1.18 | - | n/a |
Overview
CSI avoids encoding Kubernetes-specific information into the specification, since it aims to support multiple orchestration systems (beyond just Kubernetes).
This can be problematic because some CSI drivers require information about the workload (e.g. which pod is referencing this volume), and CSI does not provide this information natively to drivers.
Pod Info on Mount with CSI Driver Object
The CSIDriver Object enables CSI Drivers to specify how Kubernetes should interact with it.
Specifically the podInfoOnMount
field instructs Kubernetes that the CSI driver requires additional pod information (like podName, podUID, etc.) during mount operations.
For example, the existence of the following object would cause Kubernetes to add pod information at mount time to the NodePublishVolumeRequest.volume_context
map.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: testcsidriver.example.com
spec:
podInfoOnMount: true
If the podInfoOnMount
field is set to true
, during mount, Kubelet will add the following key/values to the volume_context
field in the CSI NodePublishVolumeRequest
:
csi.storage.k8s.io/pod.name: {pod.Name}
csi.storage.k8s.io/pod.namespace: {pod.Namespace}
csi.storage.k8s.io/pod.uid: {pod.UID}
csi.storage.k8s.io/serviceAccount.name: {pod.Spec.ServiceAccountName}
The CSIDriver object should be manually included in the driver manifests.
Previously, the cluster-driver-registrar sidecar container could be used to create the object. Once the flags to this container are configured correctly, it will automatically create a CSIDriver Object when it starts with the correct fields set.
Alpha Functionality
In alpha, this feature was enabled by setting the podInfoOnMountVersion
field in the CSIDriver
Object CRD to v1
.
apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
name: testcsidriver.example.com
spec:
podInfoOnMountVersion: v1
Volume Expansion
Status
Status | Min K8s Version | Max K8s Version | external-resizer Version |
---|---|---|---|
Alpha | 1.14 | 1.15 | 0.2 |
Beta | 1.16 | - | 0.3 |
Overview
A storage provider that allows volume expansion after creation may choose to implement volume expansion either via a control-plane CSI RPC call, via a node CSI RPC call, or both as a two-step process.
Implementing Volume expansion functionality
To implement volume expansion the CSI driver MUST:
- Implement
VolumeExpansion
plugin capability. - Implement
EXPAND_VOLUME
controller capability or implementEXPAND_VOLUME
node capability or both.
The `ControllerExpandVolume` RPC call can be made when the volume is ONLINE or OFFLINE, depending on the `VolumeExpansion` plugin capability, where ONLINE and OFFLINE mean:
- ONLINE : Volume is currently published or available on a node.
- OFFLINE : Volume is currently not published or available on a node.
The `NodeExpandVolume` RPC call, on the other hand, always requires the volume to be published or staged on a node (and hence ONLINE).
For block storage file systems, `NodeExpandVolume` is typically used for expanding the file system on the node, but it can also be used to perform other volume-expansion-related housekeeping operations on the node.
For details, see the CSI spec.
Deploying volume expansion functionality
The Kubernetes CSI development team maintains external-resizer Kubernetes CSI Sidecar Containers.
This sidecar container implements the logic for watching the Kubernetes API for PersistentVolumeClaim edits, issuing the `ControllerExpandVolume` RPC call against a CSI endpoint, and updating the `PersistentVolume` object to reflect the new size.
This sidecar is needed even if the CSI driver does not have the `EXPAND_VOLUME` controller capability; in that case it performs a NO-OP expansion and updates the `PersistentVolume` object. `NodeExpandVolume` is always called by Kubelet on the node.
For more details, see external-resizer.
Enabling Volume expansion for CSI volumes in Kubernetes
To expand a volume if permitted by the storage class, users just need to edit the persistent volume claim object and request more storage.
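A minimal sketch of what this looks like in practice (names are placeholders): the StorageClass must allow expansion, and the user then increases spec.resources.requests.storage on the PVC.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-storage
provisioner: csi-driver.team.example.com
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: expandable-storage
  resources:
    requests:
      storage: 20Gi   # was 10Gi; raising this value triggers the expansion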
In Kubernetes 1.14 and 1.15, this feature was in alpha status and required enabling the following feature gate:
--feature-gates=ExpandCSIVolumes=true
Also in Kubernetes 1.14 and 1.15, online expansion had to be enabled explicitly:
--feature-gates=ExpandInUsePersistentVolumes=true
external-resizer and kubelet add appropriate events and conditions to persistent volume claim objects indicating progress of volume expansion operations.
Kubernetes PVC DataSource (CSI VolumeContentSource)
When creating a new PersistentVolumeClaim, the Kubernetes API provides a PersistentVolumeClaim.DataSource
parameter. This parameter is used to specify the CSI CreateVolumeRequest.VolumeContentSource
option for CSI Provisioners. The VolumeContentSource
parameter instructs the CSI plugin to pre-populate the volume being provisioned with data from the specified source.
External Provisioner Responsibilities
If a DataSource
is specified in the CreateVolume
call to the CSI external provisioner, the external provisioner will fetch the specified resource and pass the appropriate object id to the plugin.
Supported DataSources
Currently there are two types of `PersistentVolumeClaim.DataSource` objects that are supported:
- An existing `PersistentVolumeClaim` (volume cloning)
- A `VolumeSnapshot` (restore from snapshot)
Volume Cloning
Status and Releases
Status | Min k8s Version | Max k8s version | external-provisioner Version |
---|---|---|---|
Alpha | 1.15 | 1.15 | 1.3 |
Beta | 1.16 | 1.17 | 1.4 |
GA | 1.18 | - | 1.6 |
Overview
A Clone is defined as a duplicate of an existing Kubernetes Volume. For more information on cloning in Kubernetes see the concepts doc for Volume Cloning. A storage provider that allows volume cloning as a create feature may choose to implement volume cloning via a control-plane CSI RPC call.
For details regarding the kubernetes API for volume cloning, please see kubernetes concepts.
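For example, a clone is requested by creating a new PVC whose dataSource points at an existing PVC in the same namespace (names and the StorageClass are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: fast-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi        # must be at least the size of the source volume
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc       # existing PVC to clone, in the same namespace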
Implementing Volume cloning functionality
To implement volume cloning the CSI driver MUST:
- Implement checks for
csi.CreateVolumeRequest.VolumeContentSource
in the plugin'sCreateVolume
function implementation. - Implement
CLONE_VOLUME
controller capability.
It is the responsibility of the storage plugin to either implement an expansion after clone if a provision request size is greater than the source, or allow the external-resizer to handle it. In the case that the plugin does not support resize capability and it does not have the capability to create a clone that is greater in size than the specified source volume, then the provision request should result in a failure.
Deploying volume clone functionality
The Kubernetes CSI development team maintains the external-provisioner which is responsible for detecting requests for a PVC DataSource and providing that information to the plugin via the csi.CreateVolumeRequest
. It's up to the plugin to check the csi.CreateVolumeRequest
for a VolumeContentSource
entry in the CreateVolumeRequest object.
There are no additional side-cars or add on components required.
Enabling Cloning for CSI volumes in Kubernetes
Volume cloning was promoted to Beta in version 1.16 and GA in 1.18, and as such is enabled by default for Kubernetes versions >= 1.16.
In Kubernetes 1.15 this feature was in alpha status and required enabling the appropriate feature gate:
--feature-gates=VolumePVCDataSource=true
Example implementation
A trivial example implementation can be found in the csi-hostpath plugin in its implementation of CreateVolume
.
Snapshot & Restore Feature
Status
Status | Min K8s Version | Max K8s Version | snapshot-controller Version | snapshot-validation-webhook Version | CSI external-snapshotter sidecar Version | external-provisioner Version |
---|---|---|---|---|---|---|
Alpha | 1.12 | 1.12 | 0.4.0 <= version < 1.0 | 0.4.1 <= version < 1.0 | ||
Alpha | 1.13 | 1.16 | 1.0.1 <= version < 2.0 | 1.0.1 <= version < 1.5 | ||
Beta | 1.17 | - | 2.0+ | 3.0+ | 2.0+ | 1.5+ |
IMPORTANT: The validation logic for VolumeSnapshots and VolumeSnapshotContents has been replaced by CEL validation rules. The validating webhook is now only being used for VolumeSnapshotClasses to ensure that there's at most one default class per CSI Driver. The validation webhook is deprecated and will be removed in the next release.
Overview
Many storage systems provide the ability to create a "snapshot" of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to provision a new volume (pre-populated with the snapshot data) or to restore the existing volume to a previous state (represented by the snapshot).
Kubernetes CSI currently enables CSI Drivers to expose the following functionality via the Kubernetes API:
- Creation and deletion of volume snapshots via Kubernetes native API.
- Creation of new volumes pre-populated with the data from a snapshot via Kubernetes dynamic volume provisioning.
Note: Documentation under https://kubernetes.io/docs is for the latest Kubernetes release. Documentation for earlier releases is stored in a different location. For example, this is the documentation location for v1.16.
Implementing Snapshot & Restore Functionality in Your CSI Driver
To implement the snapshot feature, a CSI driver MUST:
- Implement the
CREATE_DELETE_SNAPSHOT
and, optionally, theLIST_SNAPSHOTS
controller capabilities - Implement
CreateSnapshot
,DeleteSnapshot
, and, optionally, theListSnapshots
, controller RPCs.
For details, see the CSI spec.
Sidecar Deployment
The Kubernetes CSI development team maintains the external-snapshotter Kubernetes CSI Sidecar Containers. This sidecar container implements the logic for watching the Kubernetes API objects and issuing the appropriate CSI snapshot calls against a CSI endpoint. For more details, see external-snapshotter documentation.
Snapshot Beta
Snapshot APIs
With the promotion of Volume Snapshot to beta, the feature is now enabled by default on standard Kubernetes deployments instead of being opt-in. This involves a revamp of volume snapshot APIs.
The schema definition for the custom resources (CRs) can be found here. The CRDs are no longer automatically deployed by the sidecar. They should be installed by the Kubernetes distributions.
Highlights in the snapshot v1beta1 APIs
- DeletionPolicy is a required field in both VolumeSnapshotClass and VolumeSnapshotContent. This way the user has to explicitly specify it, leaving no room for confusion.
- VolumeSnapshotSpec has a required Source field. Source may be either a PersistentVolumeClaimName (if dynamically provisioning a snapshot) or VolumeSnapshotContentName (if pre-provisioning a snapshot).
- VolumeSnapshotContentSpec has a required Source field. This Source may be either a VolumeHandle (if dynamically provisioning a snapshot) or a SnapshotHandle (if pre-provisioning volume snapshots).
- VolumeSnapshot contains a Status to indicate the current state of the volume snapshot. It has a field BoundVolumeSnapshotContentName to indicate the VolumeSnapshot object is bound to a VolumeSnapshotContent.
- VolumeSnapshotContent contains a Status to indicate the current state of the volume snapshot content. It has a field SnapshotHandle to indicate that the VolumeSnapshotContent represents a snapshot on the storage system.
Controller Split
- The CSI external-snapshotter sidecar is split into two controllers, a snapshot controller and a CSI external-snapshotter sidecar.
The snapshot controller is deployed by the Kubernetes distributions and is responsible for watching the VolumeSnapshot CRD objects and managing the creation and deletion lifecycle of snapshots.
The CSI external-snapshotter sidecar watches Kubernetes VolumeSnapshotContent CRD objects and triggers CreateSnapshot/DeleteSnapshot against a CSI endpoint.
Snapshot Validation Webhook
There is a new validating webhook server which provides tightened validation on snapshot objects. This SHOULD be installed by the Kubernetes distros along with the snapshot-controller, not end users. It SHOULD be installed in all Kubernetes clusters that have the snapshot feature enabled. See Snapshot Validation Webhook for more details on how to use the webhook.
Kubernetes Cluster Setup
Volume snapshot was promoted to beta in Kubernetes 1.17, so the VolumeSnapshotDataSource feature gate is enabled by default.
See the Deployment section of Snapshot Controller on how to set up the snapshot controller and CRDs.
See the Deployment section of Snapshot Validation Webhook for more details on how to use the webhook.
Test Snapshot Feature
To test snapshot Beta version, use the following example yaml files.
Create a StorageClass:
kubectl create -f storageclass.yaml
Create a PVC:
kubectl create -f pvc.yaml
Create a VolumeSnapshotClass:
kubectl create -f snapshotclass-v1.yaml
Create a VolumeSnapshot:
kubectl create -f snapshot-v1.yaml
Create a PVC from a VolumeSnapshot:
kubectl create -f restore.yaml
Snapshot Alpha
Snapshot APIs
Similar to the API for managing Kubernetes Persistent Volumes, the Kubernetes Volume Snapshots introduce three new API objects for managing snapshots: VolumeSnapshot
, VolumeSnapshotContent
, and VolumeSnapshotClass
. See Kubernetes Snapshot documentation for more details.
Unlike the core Kubernetes Persistent Volume objects, these Snapshot objects are defined as Custom Resource Definitions (CRDs). This is because the Kubernetes project is moving away from having resource types pre-defined in the API server. This allows the API server to be reused for projects other than Kubernetes, and consumers (like Kubernetes) simply install the resource types they require as CRDs. Because the Snapshot API types are not built in to Kubernetes, they must be installed prior to use.
The CRDs are automatically deployed by the CSI external-snapshotter sidecar. See Alpha section of the sidecar doc here.
The schema definition for the custom resources (CRs) can be found here.
In addition to these new CRD objects, a new, alpha DataSource
field has been added to the PersistentVolumeClaim
object. This new field enables dynamic provisioning of new volumes that are automatically pre-populated with data from an existing snapshot.
Kubernetes Cluster Setup
Since volume snapshot is an alpha feature in Kubernetes v1.12 to v1.16, you need to enable a new alpha feature gate called VolumeSnapshotDataSource
in the Kubernetes master.
--feature-gates=VolumeSnapshotDataSource=true
Test Snapshot Feature
To test snapshot Alpha version, use the following example yaml files.
Create a StorageClass:
kubectl create -f storageclass.yaml
Create a PVC:
kubectl create -f pvc.yaml
Create a VolumeSnapshotClass:
kubectl create -f snapshotclass.yaml
Create a VolumeSnapshot:
kubectl create -f snapshot.yaml
Create a PVC from a VolumeSnapshot:
kubectl create -f restore.yaml
PersistentVolumeClaim not Bound
If a PersistentVolumeClaim
is not bound, the attempt to create a volume snapshot from that PersistentVolumeClaim
will fail. No retries will be attempted. An event will be logged to indicate that the PersistentVolumeClaim
is not bound.
Note that this could happen if the PersistentVolumeClaim
spec and the VolumeSnapshot
spec are in the same YAML file. In this case, when the VolumeSnapshot
object is created, the PersistentVolumeClaim
object is created but volume creation is not complete and therefore the PersistentVolumeClaim
is not yet bound. You must wait until the PersistentVolumeClaim
is bound and then create the snapshot.
Examples
See the Drivers for a list of CSI drivers that implement the snapshot feature.
Volume Group Snapshot Feature
Status
Status | Min K8s Version | Max K8s Version | snapshot-controller Version | snapshot-validation-webhook Version | CSI external-snapshotter sidecar Version | external-provisioner Version |
---|---|---|---|---|---|---|
Alpha | 1.27 | - | 7.0+ | 7.0+ | 7.0+ | 4.0+ |
IMPORTANT: The validation logic for VolumeGroupSnapshots and VolumeGroupSnapshotContents has been replaced by CEL validation rules. The validating webhook is now only being used for VolumeGroupSnapshotClasses to ensure that there's at most one default class per CSI Driver. The validation webhook is deprecated and will be removed in the next release.
Overview
Some storage systems provide the ability to create a crash consistent snapshot of multiple volumes. A group snapshot represents “copies” from multiple volumes that are taken at the same point-in-time. A group snapshot can be used either to rehydrate new volumes (pre-populated with the snapshot data) or to restore existing volumes to a previous state (represented by the snapshots).
Kubernetes CSI currently enables CSI Drivers to expose the following functionality via the Kubernetes API:
- Creation and deletion of volume group snapshots via Kubernetes native API.
- Creation of new volumes pre-populated with the data from a snapshot that is part of the volume group snapshot via Kubernetes dynamic volume provisioning.
Implementing Volume Group Snapshot Functionality in your CSI Driver
To implement the volume group snapshot feature, a CSI driver MUST:
- Implement a new group controller service.
- Implement the group controller RPCs: CreateVolumeGroupSnapshot, DeleteVolumeGroupSnapshot, and GetVolumeGroupSnapshot.
- Add the group controller capability CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT.
For details, see the CSI spec.
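A minimal sketch, assuming the group controller types from a recent version of the CSI spec Go bindings (v1.9+) and a driver struct of your own, of advertising the group snapshot capability:

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// GroupControllerGetCapabilities advertises group snapshot support so that the
// external-snapshotter sidecar will issue the volume group snapshot RPCs.
func (d *driver) GroupControllerGetCapabilities(ctx context.Context, req *csi.GroupControllerGetCapabilitiesRequest) (*csi.GroupControllerGetCapabilitiesResponse, error) {
	return &csi.GroupControllerGetCapabilitiesResponse{
		Capabilities: []*csi.GroupControllerServiceCapability{{
			Type: &csi.GroupControllerServiceCapability_Rpc{
				Rpc: &csi.GroupControllerServiceCapability_RPC{
					Type: csi.GroupControllerServiceCapability_RPC_CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT,
				},
			},
		}},
	}, nil
}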
Sidecar Deployment
The Kubernetes CSI development team maintains the external-snapshotter Kubernetes CSI Sidecar Containers. This sidecar container implements the logic for watching the Kubernetes API objects and issuing the appropriate CSI volume group snapshot calls against a CSI endpoint. For more details, see external-snapshotter documentation.
Volume Group Snapshot APIs
With the introduction of Volume Group Snapshot, the user can create and delete a group snapshot using Kubernetes APIs.
The schema definition for the custom resources (CRs) can be found here. The CRDs should be installed by the Kubernetes distributions.
There are 3 APIs:
VolumeGroupSnapshot
: Created by a Kubernetes user (or perhaps by your own automation) to request
creation of a volume group snapshot for multiple volumes.
It contains information about the volume group snapshot operation such as the
timestamp when the volume group snapshot was taken and whether it is ready to use.
The creation and deletion of this object represents a desire to create or delete a
cluster resource (a group snapshot).
VolumeGroupSnapshotContent
: Created by the snapshot controller for a dynamically created VolumeGroupSnapshot.
It contains information about the volume group snapshot including the volume group
snapshot ID.
This object represents a provisioned resource on the cluster (a group snapshot).
The VolumeGroupSnapshotContent object binds to the VolumeGroupSnapshot for which it
was created with a one-to-one mapping.
VolumeGroupSnapshotClass
: Created by cluster administrators to describe how volume group snapshots should be
created, including the driver information, the deletion policy, etc.
Controller
- The controller logic for volume group snapshot is added to the snapshot controller and the CSI external-snapshotter sidecar.
The snapshot controller is deployed by the Kubernetes distributions and is responsible for watching the VolumeGroupSnapshot CRD objects and managing the creation and deletion lifecycle of volume group snapshots.
The CSI external-snapshotter sidecar watches Kubernetes VolumeGroupSnapshotContent CRD objects and triggers CreateVolumeGroupSnapshot/DeleteVolumeGroupSnapshot against a CSI endpoint.
Snapshot Validation Webhook
The validating webhook server is updated to validate volume group snapshot objects. This SHOULD be installed by the Kubernetes distros along with the snapshot-controller, not end users. It SHOULD be installed in all Kubernetes clusters that have the volume group snapshot feature enabled. See Snapshot Validation Webhook for more details on how to use the webhook.
Kubernetes Cluster Setup
See the Deployment section of Snapshot Controller on how to set up the snapshot controller and CRDs.
See the Deployment section of Snapshot Validation Webhook for more details on how to use the webhook.
Test Volume Group Snapshot Feature
To test the volume group snapshot feature, use the following example yaml files.
Create a StorageClass:
kubectl create -f storageclass.yaml
Create PVCs:
# This will create a PVC named hpvc
kubectl create -f pvc.yaml
# This will create a PVC named hpvc-2
sed "s/hpvc/hpvc-2/" pvc.yaml | kubectl create -f -
Add a label to the PVCs:
kubectl label pvc hpvc hpvc-2 app.kubernetes.io/name=postgresql
Create a VolumeGroupSnapshotClass:
kubectl create -f groupsnapshotclass-v1alpha1.yaml
Create a VolumeGroupSnapshot:
kubectl create -f groupsnapshot-v1alpha1.yaml
Once the VolumeGroupSnapshot is ready, the pvcVolumeSnapshotRefList
status field will contain the names of the generated VolumeSnapshot objects:
kubectl get volumegroupsnapshot new-groupsnapshot-demo -o yaml | sed -n '/pvcVolumeSnapshotRefList/,$p'
pvcVolumeSnapshotRefList:
- persistentVolumeClaimRef:
name: hpvc
volumeSnapshotRef:
name: snapshot-4bcc4a322a473abf32babe3df5779d14349542b1f0eb6f9dab0466a85c59cd42-2024-06-19-12.35.17
- persistentVolumeClaimRef:
name: hpvc-2
volumeSnapshotRef:
name: snapshot-62bd0be591e1e10c22d51748cd4a53c0ae8bf52fabb482bee7bc51f8ff9d9589-2024-06-19-12.35.17
readyToUse: true
Create a PVC from a VolumeSnapshot that is part of the group snapshot:
# In the command below, the volume snapshot name should be chosen from
# the ones listed in the output of the previous command
sed 's/new-snapshot-demo-v1/snapshot-4bcc4a322a473abf32babe3df5779d14349542b1f0eb6f9dab0466a85c59cd42-2024-06-19-12.35.17/' restore.yaml | kubectl create -f -
Examples
See the Drivers for a list of CSI drivers that implement the group snapshot feature.
Pod Inline Volume Support
Status
CSI Ephemeral Inline Volumes
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.15 | 1.15 |
Beta | 1.16 | 1.24 |
GA | 1.25 | - |
Generic Ephemeral Inline Volumes
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.19 | 1.20 |
Beta | 1.21 | 1.22 |
GA | 1.23 | - |
Overview
Traditionally, volumes that are backed by CSI drivers can only be used
with a PersistentVolume
and PersistentVolumeClaim
object
combination. Two different Kubernetes features allow volumes to follow
the Pod's lifecycle: CSI ephemeral volumes and generic ephemeral
volumes.
In both features, the volumes are specified directly in the pod specification for ephemeral use cases. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods where Kubernetes and the driver handle all phases of volume operations as pods are created and destroyed.
However, the two features are targeted at different use cases and thus have different APIs and different implementations.
See the CSI inline volumes and generic ephemeral volumes enhancement proposals for design details. The user facing documentation for both features is in the Kubernetes documentation.
Which feature should my driver support?
CSI ephemeral inline volumes are meant for simple, local volumes. All parameters that determine the content of the volume can be specified in the pod spec, and only there. Storage classes are not supported and all parameters are driver specific.
apiVersion: v1
kind: Pod
metadata:
name: some-pod
spec:
containers:
...
volumes:
- name: vol
csi:
driver: inline.storage.kubernetes.io
volumeAttributes:
foo: bar
A CSI driver is suitable for CSI ephemeral inline volumes if:
- it serves a special purpose and needs custom per-volume parameters, like drivers that provide secrets to a pod
- it can create volumes when running on a node
- fast volume creation is needed
- resource usage on the node is small and/or does not need to be exposed to Kubernetes
- rescheduling of pods onto a different node when storage capacity turns out to be insufficient is not needed
- none of the usual volume features (restoring from snapshot, cloning volumes, etc.) are needed
- ephemeral inline volumes have to be supported on Kubernetes clusters which do not support generic ephemeral volumes
A CSI driver is not suitable for CSI ephemeral inline volumes when:
- provisioning is not local to the node
- ephemeral volume creation requires volumeAttributes that should be restricted to an administrator, for example parameters that are otherwise set in a StorageClass or PV. Ephemeral inline volumes allow these attributes to be set directly in the Pod spec, and so are not restricted to an admin.
Generic ephemeral inline volumes make the normal volume API (storage
classes, PersistentVolumeClaim
) usable for ephemeral inline
volumes.
kind: Pod
apiVersion: v1
metadata:
name: some-pod
spec:
containers:
...
volumes:
- name: scratch-volume
ephemeral:
volumeClaimTemplate:
metadata:
labels:
type: my-frontend-volume
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "scratch-storage-class"
resources:
requests:
storage: 1Gi
A CSI driver is suitable for generic ephemeral inline volumes if it supports dynamic provisioning of volumes. No other changes are needed in the driver in that case. Such a driver can also support CSI ephemeral inline volumes if desired.
Security Considerations
CSI driver vendors that choose to support ephemeral inline volumes are responsible for secure handling of these volumes, and special consideration needs to be given to what volumeAttributes are supported by the driver. As noted above, a CSI driver is not suitable for CSI ephemeral inline volumes when volume creation requires volumeAttributes that should be restricted to an administrator. These attributes are set directly in the Pod spec, and therefore are not automatically restricted to an administrator when used as an inline volume.
CSI inline volumes are only intended to be used for ephemeral storage, and driver vendors should NOT allow usage of inline volumes for persistent storage unless they also provide a third party pod admission plugin to restrict usage of these volumes.
Cluster administrators who need to restrict the CSI drivers that are allowed to be used as inline volumes within a Pod spec may do so by:
- Removing Ephemeral from volumeLifecycleModes in the CSIDriver spec, which prevents the driver from being used as an inline ephemeral volume.
- Using an admission webhook to restrict how this driver is used.
Implementing CSI ephemeral inline support
Drivers must be modified (or implemented specifically) to support CSI inline ephemeral workflows. When Kubernetes encounters an inline CSI volume embedded in a pod spec, it treats that volume differently. Mainly, the driver will only receive NodePublishVolume during the volume's mount phase, and NodeUnpublishVolume when the pod is going away and the volume is unmounted.
Due to these requirements, ephemeral volumes are not created using the Controller Service, but the Node Service instead. When the kubelet calls NodePublishVolume, it is the responsibility of the CSI driver to create the volume during that call and then publish the volume to the specified location. When the kubelet calls NodeUnpublishVolume, it is the responsibility of the CSI driver to delete the volume.
To support inline, a driver must implement the following:
- Identity service
- Node service
CSI Extension Specification
NodePublishVolume
Arguments
- volume_id: The volume ID will be created by Kubernetes and passed to the driver by the kubelet.
- volume_context["csi.storage.k8s.io/ephemeral"]: This value will be available and will be equal to "true".
Workflow
The driver will receive the appropriate arguments as defined above when an
ephemeral volume is requested. The driver will create and publish the volume
to the specified location as noted in the NodePublishVolume request. Volume
size and any other parameters required will be passed in verbatim from the
inline manifest parameters to the NodePublishVolumeRequest.volume_context
.
There is no guarantee that NodePublishVolume will be called again after a failure, regardless of what the failure is. To avoid leaking resources, a CSI driver must either always free all resources before returning from NodePublishVolume on error or implement some kind of garbage collection.
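As a hedged sketch (createLocalVolume and mountVolume are hypothetical helpers, and the driver struct is assumed; standard CSI Go bindings), the ephemeral case inside NodePublishVolume might look like this:

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// NodePublishVolume detects the inline ephemeral case, creates the volume on
// this node from the pod-supplied volumeAttributes, and publishes it at the
// target path requested by kubelet.
func (d *driver) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	if req.GetVolumeContext()["csi.storage.k8s.io/ephemeral"] == "true" {
		// createLocalVolume is a hypothetical helper that provisions the
		// volume locally using the inline volumeAttributes.
		if err := d.createLocalVolume(req.GetVolumeId(), req.GetVolumeContext()); err != nil {
			return nil, err
		}
	}
	// mountVolume is a hypothetical helper that makes the volume available
	// at the target path.
	if err := d.mountVolume(req.GetVolumeId(), req.GetTargetPath()); err != nil {
		return nil, err
	}
	return &csi.NodePublishVolumeResponse{}, nil
}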
NodeUnpublishVolume
Arguments
No changes
Workflow
The driver is responsible for deleting the ephemeral volume once it has unpublished it. It MAY delete the volume before finishing the request, or after the request to unpublish has returned.
Read-Only Volumes
It is possible for a CSI driver to provide volumes to Pods as read-only while allowing them to be writeable on the node for kubelet, the driver, and the container runtime. This allows the CSI driver to dynamically update contents of the volume without exposing issues like CVE-2017-1002102, since the volume is read-only for the end user. It also allows the fsGroup
and SELinux context of files to be applied on the node so the Pod gets the volume with the expected permissions and SELinux label.
To benefit from this behavior, the following can be implemented in the CSI driver:
- The driver provides an admission plugin that sets ReadOnly: true on all volumeMounts of such volumes. We can't trust that this will be done by every user on every pod.
- The driver checks that the readonly flag is set in all NodePublish requests. We can't trust that the admission plugin above is deployed on every cluster.
- When both conditions above are satisfied, the driver MAY ignore the readonly flag in NodePublish and set up the volume as read-write. Ignoring the readonly flag in NodePublish is considered valid CSI driver behavior for inline ephemeral volumes.
The presence of ReadOnly: true
in the Pod spec tells kubelet to bind-mount the volume to the container as read-only, while the underlying mount is read-write on the host. This is the same behavior used for projected volumes like Secrets and ConfigMaps.
CSIDriver
Kubernetes only allows using a CSI driver for an inline volume if
its CSIDriver
object explicitly declares
that the driver supports that kind of usage in its
volumeLifecycleModes
field. This is a safeguard against accidentally
using a driver the wrong way.
References
- CSI Host Path driver ephemeral volumes support
- Issue 82507: Drop VolumeLifecycleModes field from CSIDriver API before GA
- Issue 75222: CSI Inline - Update CSIDriver to indicate driver mode
- CSIDriver support for ephemeral volumes
- CSI Hostpath driver - an example driver that supports both modes and determines the mode on a case-by-case basis (for Kubernetes 1.16) or can be deployed with support for just one of the two modes (for Kubernetes 1.15).
- Image populator plugin - an example CSI driver plugin that uses a container image as a volume.
Volume Limits
Status
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.11 | 1.11 |
Beta | 1.12 | 1.16 |
GA | 1.17 | - |
Overview
Some storage providers may have a restriction on the number of volumes that can be used on a Node. This is common in cloud providers, but other providers might impose restrictions as well.
Kubernetes will respect this limit as long as the CSI driver advertises it. To support volume limits in a CSI driver, the plugin must fill in max_volumes_per_node in NodeGetInfoResponse.
It is recommended that CSI drivers allow for customization of volume limits. That way cluster administrators can distribute the limits of the same storage backends (e.g. iSCSI) across different drivers, according to their individual needs.
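A minimal sketch (assuming the standard CSI Go bindings; nodeID and maxVolumesPerNode are fields of a hypothetical driver struct, e.g. populated from command-line flags) of advertising the limit in NodeGetInfo:

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// NodeGetInfo advertises a per-node volume limit so the Kubernetes scheduler
// does not place more volumes on this node than the backend allows.
func (d *driver) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId:            d.nodeID,
		MaxVolumesPerNode: d.maxVolumesPerNode, // e.g. 16 for a typical cloud disk limit
	}, nil
}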
Storage Capacity Tracking
Status
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.19 | - |
Overview
Storage capacity tracking allows the Kubernetes scheduler to make more informed choices about where to start pods which depend on unbound volumes with late binding (aka "wait for first consumer"). Without storage capacity tracking, a node is chosen without knowing whether those volumes can be made available for the node. Volume creation is attempted and if that fails, the pod has to be rescheduled, potentially landing on the same node again. With storage capacity tracking, the scheduler filters out nodes which do not have enough capacity.
For design information, see the enhancement proposal.
Usage
To support rescheduling of a pod, a CSI driver deployment must:
- return the ResourceExhausted gRPC status code in CreateVolume if capacity is exhausted
- use external-provisioner >= 1.6.0, because older releases did not properly support rescheduling after a ResourceExhausted error
To support storage capacity tracking, a CSI driver deployment must:
- implement the GetCapacity call (see the sketch below)
- use external-provisioner >= 2.0.0
- enable producing storage capacity objects as explained in the external-provisioner documentation
- enable usage of that information by setting the CSIDriverSpec.StorageCapacity field to true
- run on a cluster where the storage capacity API is enabled
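A minimal GetCapacity sketch (assuming the standard CSI Go bindings; queryBackendFreeBytes is a hypothetical helper that asks the storage backend how much space is left for the given parameters and topology):

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// GetCapacity reports the remaining capacity for new volumes so that the
// external-provisioner can publish CSIStorageCapacity objects for the scheduler.
func (d *driver) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (*csi.GetCapacityResponse, error) {
	freeBytes, err := d.queryBackendFreeBytes(ctx, req.GetParameters(), req.GetAccessibleTopology())
	if err != nil {
		return nil, err
	}
	return &csi.GetCapacityResponse{
		AvailableCapacity: freeBytes,
	}, nil
}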
Further information can be found in the Kubernetes documentation.
Volume Health Monitoring Feature
Status
Status | Min K8s Version | Max K8s Version | external-health-monitor-controller Version |
---|---|---|---|
Alpha | 1.21 | - | 0.8.0 |
Overview
The External Health Monitor is part of Kubernetes implementation of Container Storage Interface (CSI). It was introduced as an Alpha feature in Kubernetes v1.19. In Kubernetes 1.21, a second Alpha was done due to a design change which deprecated External Health Monitor Agent.
The External Health Monitor is implemented as two components: External Health Monitor Controller
and Kubelet
.
-
External Health Monitor Controller:
- The external health monitor controller will be deployed as a sidecar together with the CSI controller driver, similar to how the external-provisioner sidecar is deployed.
- Trigger controller RPC to check the health condition of the CSI volumes.
- The external controller sidecar will also watch for node failure events. This component can be enabled via a flag.
-
Kubelet:
- In addition to existing volume stats collected already, Kubelet will also check volume's mounting conditions collected from the same CSI node RPC and log events to Pods if volume condition is abnormal.
The Volume Health Monitoring feature needs to invoke the following CSI interfaces.
- External Health Monitor Controller:
  - ListVolumes (if both ListVolumes and ControllerGetVolume are supported, ListVolumes will be used)
  - ControllerGetVolume
- Kubelet:
  - NodeGetVolumeStats
  - This feature in Kubelet is controlled by an Alpha feature gate CSIVolumeHealth.
See external-health-monitor-controller.md for more details on the CSI external-health-monitor-controller
sidecar.
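A rough sketch (assuming the standard CSI Go bindings; checkMountHealthy and collectUsage are hypothetical helpers) of returning a volume condition from NodeGetVolumeStats:

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// NodeGetVolumeStats returns the usual usage numbers plus the volume condition
// that kubelet uses to emit volume health events on Pods.
func (d *driver) NodeGetVolumeStats(ctx context.Context, req *csi.NodeGetVolumeStatsRequest) (*csi.NodeGetVolumeStatsResponse, error) {
	healthy, msg := d.checkMountHealthy(req.GetVolumePath()) // hypothetical helper
	usage, err := d.collectUsage(req.GetVolumePath())        // hypothetical helper returning []*csi.VolumeUsage
	if err != nil {
		return nil, err
	}
	return &csi.NodeGetVolumeStatsResponse{
		Usage: usage,
		VolumeCondition: &csi.VolumeCondition{
			Abnormal: !healthy,
			Message:  msg,
		},
	}, nil
}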
Token Requests
Status
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.20 | 1.20 |
Beta | 1.21 | 1.21 |
GA | 1.22 | - |
Overview
This feature allows CSI drivers to impersonate the pods that they mount the volumes for. This improves the security posture in the mounting process where the volumes are ACL’ed on the pods’ service account without handing out unnecessary permissions to the CSI drivers’ service account. This feature is especially important for secret-handling CSI drivers, such as the secrets-store-csi-driver. Since these tokens can be rotated and short-lived, this feature also provides a knob for CSI drivers to receive NodePublishVolume RPC calls periodically with the new token. This knob is also useful when volumes are short-lived, e.g. certificates.
See more details at the design document.
Usage
This feature adds two fields in CSIDriver
spec:
type CSIDriverSpec struct {
... // existing fields
RequiresRepublish *bool
TokenRequests []TokenRequest
}
type TokenRequest struct {
Audience string
ExpirationSeconds *int64
}
- TokenRequest.Audience:
  - This is a required field.
  - Audiences should be distinct, otherwise the validation will fail.
  - If it is an empty string, the audience of the token is the APIAudiences of kube-apiserver; otherwise, it is one of the audiences specified.
  - See more about audience specification here
- TokenRequest.ExpirationSeconds:
  - This field is optional.
  - It has to be at least 10 minutes (600 seconds) and no more than 1 << 32 seconds.
- RequiresRepublish:
  - This field is optional.
  - If this is true, NodePublishVolume will be periodically called. When used with TokenRequest, the token will be refreshed if it has expired. NodePublishVolume should only change the contents rather than the mount, because the container will not be restarted to reflect the mount change. The period between NodePublishVolume calls is 0.1s.
The token will be bound to the pod that the CSI driver is mounting volumes for and will be set in VolumeContext:
"csi.storage.k8s.io/serviceAccount.tokens": {
<audience>: {
'token': <token>,
'expirationTimestamp': <expiration timestamp in RFC3339 format>,
},
...
}
If the CSI driver doesn't find a token recorded in the volume_context, it should return an error in NodePublishVolume to inform the kubelet to retry.
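A hedged sketch of reading the token from the volume context in NodePublishVolume (field names are assumed from the JSON shape shown above; tokenForAudience is a hypothetical helper):

import (
	"encoding/json"
	"fmt"
)

// serviceAccountToken mirrors one per-audience entry of the
// "csi.storage.k8s.io/serviceAccount.tokens" value.
type serviceAccountToken struct {
	Token               string `json:"token"`
	ExpirationTimestamp string `json:"expirationTimestamp"`
}

// tokenForAudience extracts the token for one audience from the volume context
// passed to NodePublishVolume; returning an error tells kubelet to retry.
func tokenForAudience(volumeContext map[string]string, audience string) (string, error) {
	raw, ok := volumeContext["csi.storage.k8s.io/serviceAccount.tokens"]
	if !ok {
		return "", fmt.Errorf("no service account tokens in volume context")
	}
	tokens := map[string]serviceAccountToken{}
	if err := json.Unmarshal([]byte(raw), &tokens); err != nil {
		return "", err
	}
	t, ok := tokens[audience]
	if !ok {
		return "", fmt.Errorf("no token for audience %q", audience)
	}
	return t.Token, nil
}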
Example
Here is an example of a CSIDriver
object:
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: mycsidriver.example.com
spec:
tokenRequests:
- audience: "gcp"
- audience: ""
expirationSeconds: 3600
requiresRepublish: true
Feature gate
Kube apiserver must start with the CSIServiceAccountToken
feature gate enabled:
--feature-gates=CSIServiceAccountToken=true
It is enabled by default in Kubernetes 1.21 and cannot be disabled since 1.22.
Example CSI Drivers
- secrets-store-csi-driver
- With GCP, the driver will pass the token to GCP provider to exchange for GCP credentials, and then request secrets from Secret Manager.
- With Vault,
the Vault provider will send the token to Vault which will use the token in
TokenReview
request to authenticate. - With Azure, the driver will pass the token to Azure provider to exchange for Azure credentials, and then request secrets from Key Vault.
CSI Driver fsGroup Support
There are two features related to supporting fsGroup
for the CSI driver: CSI volume fsGroup policy and delegating fsGroup to CSI driver. For more information about using fsGroup
in Kubernetes, please refer to the Kubernetes documentation on Pod security context.
CSI Volume fsGroup Policy
Status
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.19 | 1.19 |
Beta | 1.20 | 1.22 |
GA | 1.23 | - |
Overview
CSI Drivers can indicate whether or not they support modifying a volume's ownership or permissions when the volume is being mounted. This can be useful if the CSI Driver does not support the operation, or wishes to re-use volumes with constantly changing permissions.
See the design document for further information.
Example Usage
When creating the CSI Driver object, fsGroupPolicy
is defined in the driver's spec. The following shows the hostpath driver with None
included, indicating that the volumes should not be modified when mounted:
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: hostpath.csi.k8s.io
spec:
# Supports persistent and ephemeral inline volumes.
volumeLifecycleModes:
- Persistent
- Ephemeral
# To determine at runtime which mode a volume uses, pod info and its
# "csi.storage.k8s.io/ephemeral" entry are needed.
podInfoOnMount: true
fsGroupPolicy: None
Supported Modes
The following modes are supported:
- None: Indicates that volumes will be mounted with no modifications, as the CSI volume driver does not support these operations.
- File: Indicates that the CSI volume driver supports volume ownership and permission change via fsGroup, and Kubernetes may use fsGroup to change permissions and ownership of the volume to match the user-requested fsGroup in the pod's SecurityPolicy regardless of fstype or access mode.
- ReadWriteOnceWithFSType: Indicates that volumes will be examined to determine if volume ownership and permissions should be modified to match the pod's security policy. Changes will only occur if the fsType is defined and the persistent volume's accessModes contains ReadWriteOnce.
If undefined, fsGroupPolicy
will default to ReadWriteOnceWithFSType
, keeping the previous behavior.
Feature Gates
To use this field, Kubernetes 1.19 binaries must start with the CSIVolumeFSGroupPolicy
feature gate enabled:
--feature-gates=CSIVolumeFSGroupPolicy=true
This is enabled by default on 1.20 and higher.
Delegate fsGroup to CSI Driver
Status
Status | Min K8s Version | Max K8s Version |
---|---|---|
Alpha | 1.22 | 1.22 |
Beta | 1.23 | - |
GA | 1.26 | - |
Overview
For most drivers, kubelet applies the fsGroup
specified in a Pod spec by recursively changing volume ownership during the mount process. This does not work for certain drivers. For example:
- A driver requires passing fsGroup to mount options in order for it to take effect.
- A driver needs to apply fsGroup at the stage step (NodeStageVolume in CSI; MountDevice in Kubernetes) instead of the mount step (NodePublishVolume in CSI; SetUp/SetUpAt in Kubernetes).
This feature provides a mechanism for the driver to apply fsGroup
instead of kubelet. Specifically, it passes fsGroup
to the CSI driver through NodeStageVolume
and NodePublishVolume
calls, and the kubelet fsGroup
logic is disabled. The driver is expected to apply the fsGroup
within one of these calls.
If this feature is enabled in Kubernetes and a volume uses a driver that supports this feature, CSIDriver.spec.fsGroupPolicy
and Pod.spec.securityContext.fsGroupChangePolicy
are ignored.
See the design document and the description of the
VolumeCapability.MountVolume.volume_mount_group
field in the CSI spec for further information.
Usage
The CSI driver must implement the VOLUME_MOUNT_GROUP
node service capability. The Pod-specified fsGroup
will be available in NodeStageVolumeRequest
and NodePublishVolumeRequest
via VolumeCapability.MountVolume.VolumeMountGroup
.
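A minimal sketch, assuming the standard CSI Go bindings, of how a driver might read the delegated group from the volume capability in NodeStageVolume or NodePublishVolume:

import (
	"github.com/container-storage-interface/spec/lib/go/csi"
)

// volumeMountGroup extracts the fsGroup that kubelet delegated to the driver.
// The driver is then expected to apply it itself, for example by adding a
// backend-specific group mount option (shown here only as an illustration).
func volumeMountGroup(cap *csi.VolumeCapability) string {
	mount := cap.GetMount()
	if mount == nil {
		return "" // block volume or no mount capability present
	}
	return mount.GetVolumeMountGroup()
}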
Feature Gates
To use this field, Kubernetes 1.22 binaries must start with the DelegateFSGroupToCSIDriver
feature gate enabled:
--feature-gates=DelegateFSGroupToCSIDriver=true
This is enabled by default on 1.23 and higher.
CSI Windows Support
Status
Status | Min K8s Version | Min CSI proxy Version | Min Node Driver Registrar Version |
---|---|---|---|
GA | 1.19 | 1.0.0 | 1.3.0 |
Beta | 1.19 | 0.2.0 | 1.3.0 |
Alpha | 1.18 | 0.1.0 | 1.3.0 |
Overview
CSI drivers (e.g. AzureDisk, GCE PD, etc.) are recommended to be deployed as containers. A CSI driver's node plugin typically runs on every worker node in the cluster (as a DaemonSet). Node plugin containers need to run with elevated privileges to perform storage related operations. However, Windows did not support privileged containers (note: privileged containers, a.k.a. HostProcess containers, were only recently introduced as an alpha feature in Kubernetes 1.22). To solve this problem, CSI Proxy is a binary that runs on the Windows host and executes a set of privileged storage operations on Windows nodes on behalf of containers in a CSI Node plugin DaemonSet. This enables multiple CSI Node plugins to execute privileged storage operations on Windows nodes without having to ship a custom privileged operation proxy.
Please note that CSI controller level operations/sidecars are not supported on Windows.
How to use the CSI Proxy for Windows?
See how to install CSI Proxy in the Deployment chapter.
For CSI driver authors, import the CSI proxy client under github.com/kubernetes-csi/csi-proxy/client. There are six client API groups: disk, filesystem, iscsi, smb, system, and volume. See link for details. As an example, please check how the GCE PD Driver imports the disk, volume, and filesystem client API groups here.
The Daemonset specification of a CSI node plugin for Windows can mount the desired named pipes from CSI Proxy based on the version of the API groups that the node-plugin needs to execute.
The following Daemonset YAML shows how to mount various API groups from CSI Proxy into a CSI Node plugin:
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-storage-node-win
spec:
selector:
matchLabels:
app: csi-driver-win
template:
metadata:
labels:
app: csi-driver-win
spec:
serviceAccountName: csi-node-sa
nodeSelector:
kubernetes.io/os: windows
containers:
- name: csi-driver-registrar
image: registry.k8s.io/sig-storage/csi-node-driver-registrar
args:
- "--v=5"
- "--csi-address=unix://C:\\csi\\csi.sock"
- "--kubelet-registration-path=C:\\kubelet\\plugins\\plugin.csi\\csi.sock"
volumeMounts:
- name: plugin-dir
mountPath: C:\csi
- name: registration-dir
mountPath: C:\registration
- name: csi-driver
image: registry.k8s.io/sig-storage/csi-driver:win-v1
args:
- "--v=5"
- "--endpoint=unix:/csi/csi.sock"
volumeMounts:
- name: kubelet-dir
mountPath: C:\var\lib\kubelet
- name: plugin-dir
mountPath: C:\csi
- name: csi-proxy-disk-pipe
mountPath: \\.\pipe\csi-proxy-disk-v1
- name: csi-proxy-volume-pipe
mountPath: \\.\pipe\csi-proxy-volume-v1
- name: csi-proxy-filesystem-pipe
mountPath: \\.\pipe\csi-proxy-filesystem-v1
volumes:
- name: csi-proxy-disk-pipe
hostPath:
path: \\.\pipe\csi-proxy-disk-v1
type: ""
- name: csi-proxy-volume-pipe
hostPath:
path: \\.\pipe\csi-proxy-volume-v1
type: ""
- name: csi-proxy-filesystem-pipe
hostPath:
path: \\.\pipe\csi-proxy-filesystem-v1
type: ""
- name: registration-dir
hostPath:
path: C:\var\lib\kubelet\plugins_registry\
type: Directory
- name: kubelet-dir
hostPath:
path: C:\var\lib\kubelet\
type: Directory
- name: plugin-dir
hostPath:
path: C:\var\lib\kubelet\plugins\csi.org.io\
type: DirectoryOrCreate
Prevent unauthorized volume mode conversion
Status
Status | Min K8s Version | Max K8s Version | external-snapshotter Version | external-provisioner Version |
---|---|---|---|---|
Alpha | 1.24 | - | 6.0.1+ | 3.2.1+ |
Beta | 1.28 | - | 7.0.0+ | 4.0.0+ |
GA | 1.30 | - | 8.0.1+ | 5.0.1+ |
Overview
Malicious users can populate the spec.volumeMode
field of a PersistentVolumeClaim
with a Volume Mode
that differs from the original volume's mode to potentially exploit an as-yet-unknown
vulnerability in the host operating system.
This feature allows cluster administrators to prevent unauthorized users from converting
the mode of a volume when a PersistentVolumeClaim
is being created from an existing
VolumeSnapshot
instance.
See the Kubernetes Enhancement Proposal for more details on the background, design and discussions.
Usage
This feature is enabled by default and moved to GA with the Kubernetes 1.30 release. To use this feature, cluster administrators must:
- Create VolumeSnapshot APIs with a minimum version of v8.0.1.
- Use snapshot-controller and snapshot-validation-webhook with a minimum version of v8.0.1.
- Use external-provisioner with a minimum version of v5.0.1.
For more information about how to use the feature, visit the Kubernetes blog page.
Cross-namespace storage data sources
Status
Status | Min K8s Version | Max K8s Version | external-provisioner Version |
---|---|---|---|
Alpha | 1.26 | - | 3.4.0+ |
Overview
By default, a VolumeSnapshot is a namespace-scoped resource, while a VolumeSnapshotContent is a cluster-scoped resource. Consequently, you cannot restore a snapshot from a different namespace than the source.
With this feature enabled, you can specify a namespace attribute in the dataSourceRef. Once Kubernetes checks that access is OK, the new PersistentVolume can populate its data from the storage source specified in another namespace.
See the Kubernetes Enhancement Proposal for more details on the background, design and discussions.
Usage
To enable this feature, cluster administrators must:
- Install a CRD for ReferenceGrants supplied by the Gateway API project
- Enable the AnyVolumeDataSource and CrossNamespaceVolumeDataSource feature gates for the kube-apiserver and kube-controller-manager
- Install a CRD for the specific VolumeSnapshot controller
- Start the CSI Provisioner controller with the argument --feature-gates=CrossNamespaceVolumeDataSource=true
- Grant the CSI Provisioner get, list, and watch permissions for referencegrants (API group gateway.networking.k8s.io)
- Install the CSI driver
For more information about how to use the feature, visit the Kubernetes blog page.
Deploying CSI Driver on Kubernetes
This page describes how CSI driver developers can deploy their driver onto a Kubernetes cluster.
Overview
A CSI driver is typically deployed in Kubernetes as two components: a controller component and a per-node component.
Controller Plugin
The controller component can be deployed as a Deployment or StatefulSet on any node in the cluster. It consists of the CSI driver that implements the CSI Controller service and one or more sidecar containers. These controller sidecar containers typically interact with Kubernetes objects and make calls to the driver's CSI Controller service.
It generally does not need direct access to the host and can perform all its operations through the Kubernetes API and external control plane services. Multiple copies of the controller component can be deployed for HA; however, it is recommended to use leader election to ensure there is only one active controller at a time.
Controller sidecars include the external-provisioner, external-attacher, external-snapshotter, and external-resizer. Including a sidecar in the deployment may be optional. See each sidecar's page for more details.
Communication with Sidecars
Sidecar containers manage Kubernetes events and make the appropriate calls to the CSI driver. The calls are made by sharing a UNIX domain socket through an emptyDir volume between the sidecars and CSI Driver.
RBAC Rules
Most controller sidecars interact with Kubernetes objects and therefore need to set RBAC policies. Each sidecar repository contains example RBAC configurations.
Node Plugin
The node component should be deployed on every node in the cluster through a DaemonSet. It consists of the CSI driver that implements the CSI Node service and the node-driver-registrar sidecar container.
Communication with Kubelet
The Kubernetes kubelet runs on every node and is responsible for making the CSI Node service calls. These calls mount and unmount the storage volume from the storage system, making it available to the Pod to consume. Kubelet makes calls to the CSI driver through a UNIX domain socket shared on the host via a HostPath volume. There is also a second UNIX domain socket that the node-driver-registrar uses to register the CSI driver to kubelet.
Driver Volume Mounts
The node plugin needs direct access to the host for making block devices and/or filesystem mounts available to the Kubernetes kubelet.
The mount point used by the CSI driver must be set to Bidirectional to allow Kubelet on the host to see mounts created by the CSI driver container. See the example below:
containers:
- name: my-csi-driver
...
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: mountpoint-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: "Bidirectional"
- name: node-driver-registrar
...
volumeMounts:
- name: registration-dir
mountPath: /registration
volumes:
# This volume is where the socket for kubelet->driver communication is done
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/<driver-name>
type: DirectoryOrCreate
# This volume is where the driver mounts volumes
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet/pods
type: Directory
# This volume is where the node-driver-registrar registers the plugin
# with kubelet
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry
type: Directory
Deploying
Deploying a CSI driver onto Kubernetes is highlighted in detail in Recommended Mechanism for Deploying CSI Drivers on Kubernetes.
Enable privileged Pods
To use CSI drivers, your Kubernetes cluster must allow privileged pods (i.e. --allow-privileged
flag must be set to true
for both the API server and the kubelet). This is the default in some environments (e.g. GCE, GKE, kubeadm
).
Ensure your API server and kubelet are started with the privileged flag:
$ ./kube-apiserver ... --allow-privileged=true ...
$ ./kubelet ... --allow-privileged=true ...
Note: Starting from Kubernetes 1.13.0, --allow-privileged is true for the kubelet. It will be deprecated in future Kubernetes releases.
Enabling mount propagation
Another feature that CSI depends on is mount propagation. It allows the sharing of volumes mounted by one container with other containers in the same pod, or even to other pods on the same node. For mount propagation to work, the Docker daemon for the cluster must allow shared mounts. See the mount propagation docs to find out how to enable this feature for your cluster. This page explains how to check if shared mounts are enabled and how to configure Docker for shared mounts.
Examples
- Simple deployment example using a single pod for all components: see the hostpath example.
- Full deployment example using a DaemonSet for the node plugin and StatefulSet for the controller plugin: TODO
More information
For more information, please read CSI Volume Plugins in Kubernetes Design Doc.
Example
The Hostpath CSI driver is a simple sample driver that provisions a directory on the host. It can be used as an example to get started writing a driver, however it is not meant for production use. The deployment example shows how to deploy and use that driver in Kubernetes.
The example deployment uses the original RBAC rule files that are maintained together with the sidecar apps and deploys into the default namespace. A real production deployment should copy the RBAC files and customize them as explained in the comments of those files.
If you encounter any problems, please check the Troubleshooting page.
Testing
This section describes how CSI developers can test their CSI drivers.
Unit Testing
The CSI sanity
package from csi-test can be used for unit testing your CSI driver.
It contains a set of basic tests that all CSI drivers should pass (for example, NodePublishVolume should fail when no volume id is provided
, etc.).
This package can be used in two modes:
- Via a Golang test framework (the sanity package is imported as a dependency)
- Via a command line against your driver binary
Read the documentation of the sanity
package for more details.
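A minimal sketch of the Go test approach, assuming csi-test v5's sanity package and a driver that is already serving CSI on a local UNIX socket (for example started from TestMain):

package mydriver_test

import (
	"testing"

	"github.com/kubernetes-csi/csi-test/v5/pkg/sanity"
)

func TestSanity(t *testing.T) {
	// The driver under test is assumed to already listen on this endpoint.
	config := sanity.NewTestConfig()
	config.Address = "unix:///tmp/csi.sock"

	// Runs the csi-sanity suite (Identity, Controller, Node services)
	// against the endpoint and reports failures through go test.
	sanity.Test(t, config)
}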
Functional Testing
Drivers should be functionally "end-to-end" tested while deployed in a Kubernetes cluster. Previously, how to do this and what tests to run was left up to driver authors. Now, a standard set of Kubernetes CSI end-to-end tests can be imported and run by third party CSI drivers. This documentation specifies how to do so.
The CSI community is also looking into establishing an official "CSI Conformance Suite" to recognize "officially certified CSI drivers". This documentation will be updated with more information once that process has been defined.
Kubernetes End to End Testing for CSI Storage Plugins
Currently, csi-sanity exists to help test compliance with the CSI spec, but e2e testing of plugins is needed as well to provide plugin authors and users validation that their plugin is integrated well with specific versions of Kubernetes.
Setting up End to End tests for your CSI Plugin
Prerequisites:
- A Kubernetes v1.13+ Cluster
- Kubectl
There are two ways to run end-to-end tests for your CSI Plugin
- use Kubernetes E2E Tests, by providing a DriverDefinition YAML file via a parameter.
- Note: In some cases you will not be able to use this method of running e2e tests by just providing a YAML file defining your CSI plugin. For example, the NFS CSI plugin currently does not support dynamic provisioning, so we would want to skip those tests and run only pre-provisioned tests. For such cases, you would need to write your own testdriver, which is discussed below.
- import the in-tree storage tests and run them using
go test
.
This doc will cover how to run the E2E tests using the second method.
Importing the E2E test suite as a library
In-tree storage e2e tests could be used to test CSI storage plugins. Your repo should be setup similar to how the NFS CSI plugin is setup, where the testfiles are in a test
directory and the main test file is in the cmd
directory.
To be able to import Kubernetes in-tree storage tests, the CSI plugin would need to use Kubernetes v1.14+ (add to plugin's GoPkg.toml, since pluggable E2E tests become available in v1.14). CSI plugin authors would also be required to implement a testdriver for their CSI plugin. The testdriver provides required functionality that would help setup testcases for a particular plugin.
For any testdriver these functions would be required (Since it implements the TestDriver Interface):
GetDriverInfo() *testsuites.DriverInfo
SkipUnsupportedTest(pattern testpatterns.TestPattern)
PrepareTest(f *framework.Framework) (*testsuites.PerTestConfig, func())
The PrepareTest
method is where you would write code to set up your CSI plugin; it is called before each test case. It is recommended that you don't deploy your plugin in this method, but rather deploy it manually before running your tests.
GetDriverInfo
will return a DriverInfo
object that has all of the plugin's capabilities and required information. This object helps tests find the deployed plugin, and also decides which tests should run (depending on the plugin's capabilities).
Here are examples of the NFS and Hostpath DriverInfo objects:
testsuites.DriverInfo{
Name: "csi-nfsplugin",
MaxFileSize: testpatterns.FileSizeLarge,
SupportedFsType: sets.NewString(
"", // Default fsType
),
Capabilities: map[testsuites.Capability]bool{
testsuites.CapPersistence: true,
testsuites.CapExec: true,
},
}
testsuites.DriverInfo{
Name: "csi-hostpath",
FeatureTag: "",
MaxFileSize: testpatterns.FileSizeMedium,
SupportedFsType: sets.NewString(
"", // Default fsType
),
Capabilities: map[testsuites.Capability]bool{
testsuites.CapPersistence: true,
},
}
You would define something similar for your CSI plugin.
SkipUnsupportedTest
simply skips any tests that you define there.
Depending on your plugin's specs, you would implement other interfaces defined here. For example the NFS testdriver also implements PreprovisionedVolumeTestDriver and PreprovisionedPVTestDriver interfaces, to enable pre-provisioned tests.
After implementing the testdriver for your CSI plugin, you would create a csi-volumes.go
file, where the implemented testdriver is used to run in-tree storage testsuites, similar to how the NFS CSI plugin does so. This is where you would define which testsuites you would want to run for your plugin. All available in-tree testsuites can be found here.
Finally, importing the test
package into your main test file will initialize the testsuites to run the E2E tests.
The NFS plugin creates a binary to run E2E tests, but you could use go test
instead to run E2E tests using a command like this:
go test -v <main test file> -ginkgo.v -ginkgo.progress --kubeconfig=<kubeconfig file> -timeout=0
Drivers
The following is a set of CSI drivers which can be used with Kubernetes:
NOTE: If you would like your driver to be added to this table, please open a pull request in this repo updating this file. The Other Features column may be filled in with Raw Block, Snapshot, Expansion, Cloning, and Topology. If the driver does not implement any Other Features, please leave it blank.
DISCLAIMER: Information in this table has not been validated by Kubernetes SIG-Storage. Users who want to use these CSI drivers need to contact driver maintainers for driver capabilities.
Production Drivers
Name | CSI Driver Name | Compatible with CSI Version(s) | Description | Persistence (Beyond Pod Lifetime) | Supported Access Modes | Dynamic Provisioning | Other Features |
---|---|---|---|---|---|---|---|
Alibaba Cloud Disk | diskplugin.csi.alibabacloud.com | v1.0, v1.1 | A Container Storage Interface (CSI) Driver for Alibaba Cloud Disk | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology |
Alibaba Cloud NAS | nasplugin.csi.alibabacloud.com | v1.0, v1.1 | A Container Storage Interface (CSI) Driver for Alibaba Cloud Network Attached Storage (NAS) | Persistent | Read/Write Multiple Pods | No | |
Alibaba Cloud OSS | ossplugin.csi.alibabacloud.com | v1.0, v1.1 | A Container Storage Interface (CSI) Driver for Alibaba Cloud Object Storage Service (OSS) | Persistent | Read/Write Multiple Pods | No | |
Alluxio | csi.alluxio.com | v1.0 | A Container Storage Interface (CSI) Driver for Alluxio File System | Persistent | Read/Write Multiple Pods | Yes | |
ArStor CSI | arstor.csi.huayun.io | v1.0 | A Container Storage Interface (CSI) Driver for Huayun Storage Service (ArStor) | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning |
AWS Elastic Block Storage | ebs.csi.aws.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for AWS Elastic Block Storage (EBS) | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion |
AWS Elastic File System | efs.csi.aws.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for AWS Elastic File System (EFS) | Persistent | Read/Write Multiple Pods | Yes | |
AWS FSx for Lustre | fsx.csi.aws.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for AWS FSx for Lustre (EBS) | Persistent | Read/Write Multiple Pods | Yes | |
Azure Blob | blob.csi.azure.com | v1.0 | A Container Storage Interface (CSI) Driver for Azure Blob storage | Persistent | Read/Write Multiple Pods | Yes | Expansion |
Azure Disk | disk.csi.azure.com | v1.0 | A Container Storage Interface (CSI) Driver for Azure Disk | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
Azure File | file.csi.azure.com | v1.0 | A Container Storage Interface (CSI) Driver for Azure File | Persistent | Read/Write Multiple Pods | Yes | Expansion |
BeeGFS | beegfs.csi.netapp.com | v1.3 | A Container Storage Interface (CSI) Driver for the BeeGFS Parallel File System | Persistent | Read/Write Multiple Pods | Yes | |
Bigtera VirtualStor (block) | csi.block.bigtera.com | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for Bigtera VirtualStor block storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion |
Bigtera VirtualStor (filesystem) | csi.fs.bigtera.com | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for Bigtera VirtualStor filesystem | Persistent | Read/Write Multiple Pods | Yes | Expansion |
BizFlyCloud Block Storage | volume.csi.bizflycloud.vn | v1.2 | A Container Storage Interface (CSI) Driver for BizFly Cloud block storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion |
CeaStor Block Storage | ceastor.csi.com | v1.7.0 | A repository for the NVMe-oF CSI Driver for CeaStor | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Expansion, Cloning |
CephFS | cephfs.csi.ceph.com | v0.3, >=v1.0.0 | A Container Storage Interface (CSI) Driver for CephFS | Persistent | Read/Write Multiple Pods | Yes | Expansion, Snapshot, Cloning |
Ceph RBD | rbd.csi.ceph.com | v0.3, >=v1.0.0 | A Container Storage Interface (CSI) Driver for Ceph RBD | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology, Cloning, In-tree plugin migration |
cert-manager csi-driver | csi.cert-manager.io | v1.9.0 | Uses cert-manager to provision secretless X.509 certificates for pods via ephemeral CSI volumes | Ephemeral | Read/Write Single Pod | Yes | |
cert-manager csi-driver-spiffe | spiffe.csi.cert-manager.io | v1.9.0 | Uses cert-manager to provision X.509 SPIFFE SVIDs for pods via ephemeral CSI volumes | Ephemeral | Read/Write Single Pod | Yes | |
Cisco HyperFlex CSI | HX-CSI | v1.2 | A Container Storage Interface (CSI) Driver for Cisco HyperFlex | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Expansion, Cloning |
CubeFS | csi.cubefs.com | v1.1.0 | A Container Storage Interface (CSI) Driver for CubeFS Storage | Persistent | Read/Write Multiple Pods | Yes | |
Cinder | cinder.csi.openstack.org | v0.3, [v1.0, v1.3.0] | A Container Storage Interface (CSI) Driver for OpenStack Cinder | Persistent and Ephemeral | Depends on the storage backend used | Yes, if storage backend supports it | Raw Block, Snapshot, Expansion, Cloning, Topology |
cloudscale.ch | csi.cloudscale.ch | v1.0 | A Container Storage Interface (CSI) Driver for the cloudscale.ch IaaS platform | Persistent | Read/Write Single Pod | Yes | Snapshot |
CTDI Block Storage | csi.block.ctdi.com | v1.0 to v1.6 | A Container Storage Interface (CSI) Driver for CTDI Distributed Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning |
Datatom-InfinityCSI | csi-infiblock-plugin | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for DATATOM Infinity storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology |
Datatom-InfinityCSI (filesystem) | csi-infifs-plugin | v0.3, v1.0.0, v1.1.0 | A Container Storage Interface (CSI) Driver for DATATOM Infinity filesystem storage | Persistent | Read/Write Multiple Pods | Yes | Expansion |
Datera | dsp.csi.daterainc.io | v1.0 | A Container Storage Interface (CSI) Driver for Datera Data Services Platform (DSP) | Persistent | Read/Write Single Pod | Yes | Snapshot |
DDN EXAScaler | exa.csi.ddn.com | v1.0, v1.1 | A Container Storage Interface (CSI) Driver for DDN EXAScaler filesystems | Persistent | Read/Write Multiple Pods | Yes | Expansion |
Dell EMC PowerMax | csi-powermax.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC PowerMax | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
Dell EMC PowerScale | csi-isilon.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC PowerScale | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology |
Dell EMC PowerStore | csi-powerstore.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC PowerStore | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
Dell EMC Unity | csi-unity.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC Unity | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
Dell EMC VxFlexOS | csi-vxflexos.dellemc.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for Dell EMC VxFlexOS | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
democratic-csi | org.democratic-csi.[X] | [v1.0, v1.5] | Generic CSI plugin supporting zfs based solutions (FreeNAS / TrueNAS and ZoL solutions such as Ubuntu), Synology, and more | Persistent and Ephemeral | Read/Write Single Pod (Block Volume) Read/Write Multiple Pods (File Volume) | Yes | Raw Block, Snapshot, Expansion, Cloning |
Diamanti-CSI | dcx.csi.diamanti.com | v1.0 | A Container Storage Interface (CSI) Driver for Diamanti DCX Platform | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion |
DigitalOcean Block Storage | dobs.csi.digitalocean.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion |
Dothill-CSI | dothill.csi.enix.io | v1.3 | Generic CSI plugin supporting Seagate AssuredSan appliances such as HPE MSA, Dell EMC PowerVault ME4 and others ... | Persistent | Read/Write Single Node | Yes | Snapshot, Expansion |
Ember CSI | [x].ember-csi.io | v0.2, v0.3, v1.0 | Multi-vendor CSI plugin supporting over 80 Drivers to provide block and mount storage to Container Orchestration systems. | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot |
Excelero NVMesh | nvmesh-csi.excelero.com | v1.0, v1.1 | A Container Storage Interface (CSI) Driver for Excelero NVMesh | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Expansion |
Exoscale CSI | csi.exoscale.com | v1.8.0 | A Container Storage Interface (CSI) Driver for Exoscale Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Topology |
GCE Persistent Disk | pd.csi.storage.gke.io | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Google Compute Engine Persistent Disk (GCE PD) | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology |
Google Cloud Filestore | filestore.csi.storage.gke.io | v0.3 | A Container Storage Interface (CSI) Driver for Google Cloud Filestore | Persistent | Read/Write Multiple Pods | Yes | |
Google Cloud Storage FUSE | gcsfuse.csi.storage.gke.io | v1.x | A Container Storage Interface (CSI) Driver for Google Cloud Storage FUSE | Persistent and Ephemeral | Read/Write Multiple Pods | No | |
Google Cloud Storage | gcs.csi.ofek.dev | v1.0 | A Container Storage Interface (CSI) Driver for Google Cloud Storage | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Expansion |
GlusterFS | org.gluster.glusterfs | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for GlusterFS | Persistent | Read/Write Multiple Pods | Yes | Snapshot |
Gluster VirtBlock | org.gluster.glustervirtblock | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Gluster Virtual Block volumes | Persistent | Read/Write Single Pod | Yes | |
Hammerspace CSI | com.hammerspace.csi | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Hammerspace Storage | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot |
Hedvig | io.hedvig.csi | v1.0 | A Container Storage Interface (CSI) Driver for Hedvig | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion |
Hetzner Cloud Volumes CSI | csi.hetzner.cloud | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Hetzner Cloud Volumes | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion |
Hitachi Vantara | hspc.csi.hitachi.com | v1.2 | A Container Storage Interface (CSI) Driver for VSP series Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning |
HPE | csi.hpe.com | v1.3 | A multi-platform Container Storage Interface (CSI) driver. Supports HPE Alletra, Nimble Storage, Primera and 3PAR | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning |
HPE ClusterStor Lustre CSI | lustre-csi.hpe.com | v1.5 | A Container Storage Interface (CSI) Driver for HPE Cray ClusterStor Lustre Storage | Persistent | Read/Write Multiple Pods | No | |
HPE Ezmeral (MapR) | com.mapr.csi-kdf | v1.3 | A Container Storage Interface (CSI) Driver for HPE Ezmeral Data Fabric | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning |
HPE GreenLake for File Storage CSI Driver | filex.csi.hpe.com | v1.2 | A Container Storage Interface (CSI) Driver for HPE GreenLake for File Storage. | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning |
Huawei Storage CSI | csi.huawei.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for FusionStorage, OceanStor 100D, OceanStor Pacific, OceanStor Dorado V3, OceanStor Dorado V6, OceanStor V3, OceanStor V5 | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning |
HwameiStor | lvm.hwameistor.io disk.hwameistor.io | v1.3 | A Container Storage Interface (CSI) Driver for Local Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion |
HyperV CSI | eu.zetanova.csi.hyperv | v1.0, v1.1 | A Container Storage Interface (CSI) driver to manage hyperv hosts | Persistent | Read/Write Multiple Pods | Yes | |
IBM Block Storage | block.csi.ibm.com | [v1.0, v1.5] | A Container Storage Interface (CSI) Driver for IBM Spectrum Virtualize Family, IBM DS8000 Family 8.x and higher. | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
IBM Storage Scale | spectrumscale.csi.ibm.com | v1.5 | A Container Storage Interface (CSI) Driver for the IBM Storage Scale File System | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning |
IBM Cloud Block Storage VPC CSI Driver | vpc.block.csi.ibm.io | v1.5 | A Container Storage Interface (CSI) Driver for IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion, Snapshot |
Infinidat | infinibox-csi-driver | v1.0, v1.8 | A Container Storage Interface (CSI) Driver for Infinidat InfiniBox | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
Inspur InStorage CSI | csi-instorage | [v1.0, v1.6] | A Container Storage Interface (CSI) Driver for inspur AS/HF/CS/CF Series Primary Storage, inspur AS13000 SAN/NAS/Object Series SDS Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning |
Intel PMEM-CSI | pmem-csi.intel.com | v1.0 | A Container Storage Interface (CSI) driver for PMEM from Intel | Persistent and Ephemeral | Read/Write Single Pod | Yes | Raw Block |
Intelliflash Block Storage | intelliflash-csi-block-driver.intelliflash.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for Intelliflash Block Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology |
Intelliflash File Storage | intelliflash-csi-file-driver.intelliflash.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for Intelliflash File Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology |
ionir | ionir | v1.2 | A Container Storage Interface (CSI) Driver for ionir Kubernetes-Native Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Cloning |
JD Cloud Storage Platform Block | jdcsp-block.csi.jdcloud.com | v1.8.0 | A Container Storage Interface (CSI) Driver for JD-CSP Block | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion |
JD Cloud Storage Platform Filesystem | jdcsp-file.csi.jdcloud.com | v1.8.0 | A Container Storage Interface (CSI) Driver for JD-CSP Filesystem | Persistent | Read/Write Multiple Pods | Yes | Expansion |
JuiceFS | csi.juicefs.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for JuiceFS File System | Persistent | Read/Write Multiple Pods | Yes | |
kaDalu | org.kadalu.gluster | v0.3 | A CSI Driver (and operator) for GlusterFS | Persistent | Read/Write Multiple Pods | Yes | |
KaiXiangTech MegaBric | flexblock.csi.kaixiangtech.com | v1.5.0 | A Container Storage Interface (CSI) plugin for KaiXiangTech MegaBric Storage | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Expansion, Cloning |
KumoScale Block Storage | kumoscale.kioxia.com | v1.0 | A Container Storage Interface (CSI) Driver for KumoScale Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology |
Lightbits Labs | csi.lightbitslabs.com | v1.2, v1.3 | A Container Storage Interface (CSI) Driver for Lightbits Storage | Persistent | Read/Write Single Pod (in volumeMode FileSystem) Read/Write Multiple Pods (in volumeMode Block) | Yes | Raw Block, Snapshot, Expansion, Cloning |
Linode Block Storage | linodebs.csi.linode.com | v1.0 | A Container Storage Interface (CSI) Driver for Linode Block Storage | Persistent | Read/Write Single Pod | Yes | |
LINSTOR | linstor.csi.linbit.com | v1.2 | A Container Storage Interface (CSI) Driver for LINSTOR volumes | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
Longhorn | driver.longhorn.io | v1.5 | A Container Storage Interface (CSI) Driver for Longhorn volumes | Persistent | Read/Write Single Node | Yes | Raw Block |
MacroSAN | csi-macrosan | v1.0 | A Container Storage Interface (CSI) Driver for MacroSAN Block Storage | Persistent | Read/Write Single Pod | Yes | |
Manila | manila.csi.openstack.org | v1.1, v1.2 | A Container Storage Interface (CSI) Driver for OpenStack Shared File System Service (Manila) | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Topology |
MooseFS | com.tuxera.csi.moosefs | v1.0 | A Container Storage Interface (CSI) Driver for MooseFS clusters. | Persistent | Read/Write Multiple Pods | Yes | |
NetApp | csi.trident.netapp.io | [v1.0, v1.8] | A Container Storage Interface (CSI) Driver for NetApp's Trident container storage orchestrator | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning, Topology |
NexentaStor File Storage | nexentastor-csi-driver.nexenta.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for NexentaStor File Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology |
NexentaStor Block Storage | nexentastor-block-csi-driver.nexenta.com | v1.0, v1.1, v1.2 | A Container Storage Interface (CSI) Driver for NexentaStor over iSCSI protocol | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning, Topology, Raw Block |
NFS | nfs.csi.k8s.io | v1.0 | This driver allows Kubernetes to access NFS server on Linux node. | Persistent | Read/Write Multiple Pods | Yes | |
NGX Storage Block Storage | iscsi.csi.ngxstorage.com | v1.8.0 | A Container Storage Interface (CSI) Driver for NGXStorage over iSCSI protocol | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion, Snapshot |
Nutanix | csi.nutanix.com | v0.3, v1.0, v1.2 | A Container Storage Interface (CSI) Driver for Nutanix | Persistent | "Read/Write Single Pod" with Nutanix Volumes and "Read/Write Multiple Pods" with Nutanix Files | Yes | Raw Block, Snapshot, Expansion, Cloning |
OpenEBS | cstor.csi.openebs.io | v1.0 | A Container Storage Interface (CSI) Driver for OpenEBS | Persistent | Read/Write Single Pod | Yes | Expansion, Snapshot, Cloning |
Open-E | com.open-e.joviandss.csi | v1.0 | A Container Storage Interface (CSI) Driver for Open-E JovianDSS Storage | Persistent | Read/Write Single Pod | Yes | Snapshot, Cloning |
Open-Local | local.csi.alibaba.com | v1.0 | A Container Storage Interface (CSI) Driver for Local Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Expansion, Snapshot |
Oracle Cloud Infrastructure (OCI) Block Storage | blockvolume.csi.oraclecloud.com | v1.1 | A Container Storage Interface (CSI) Driver for Oracle Cloud Infrastructure (OCI) Block Storage | Persistent | Read/Write Single Pod | Yes | Snapshot, Expansion, Cloning, Topology |
Oracle Cloud Infrastructure (OCI) File Storage | fss.csi.oraclecloud.com | v1.1 | A Container Storage Interface (CSI) Driver for Oracle Cloud Infrastructure (OCI) File Storage | Persistent | Read/Write Multiple Pods | Yes | |
oVirt | csi.ovirt.org | v1.0 | A Container Storage Interface (CSI) Driver for oVirt | Persistent | Read/Write Single Pod | Yes | Block, File Storage |
Portworx | pxd.portworx.com | v1.4 | A Container Storage Interface (CSI) Driver for Portworx | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Raw Block, Cloning |
Proxmox | csi.proxmox.sinextra.dev | v1.9 | A Container Storage Interface (CSI) Driver for Proxmox | Persistent | Read/Write Single Pod | Yes | Expansion, Topology, Raw Block |
QingCloud CSI | disk.csi.qingcloud.com | v1.1 | A Container Storage Interface (CSI) Driver for QingCloud Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning |
QingStor CSI | neonsan.csi.qingstor.com | v0.3, v1.1 | A Container Storage Interface (CSI) Driver for NeonSAN storage system | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning |
Qiniu Kodo CSI | kodoplugin.storage.qiniu.com | v1.6 | A Container Storage Interface (CSI) Driver for Qiniu Object Storage (Kodo) | Persistent | Read/Write Multiple Pods | Yes | |
Quobyte | quobyte-csi | v1.3.0 | A Container Storage Interface (CSI) Driver for Quobyte | Persistent | Read/Write Multiple Pods | Yes | Expansion, Snapshots |
ROBIN | robin | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for ROBIN | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning |
SandStone | csi-sandstone-plugin | v1.0 | A Container Storage Interface (CSI) Driver for SandStone USP | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning |
Sangfor-EDS-File-Storage | eds.csi.file.sangfor.com | v1.0 | A Container Storage Interface (CSI) Driver for Sangfor Distributed File Storage (EDS) | Persistent | Read/Write Multiple Pods | Yes | |
Sangfor-EDS-Block-Storage | eds.csi.block.sangfor.com | v1.0 | A Container Storage Interface (CSI) Driver for Sangfor Block Storage (EDS) | Persistent | Read/Write Single Pod | Yes | |
Scaleway CSI | csi.scaleway.com | v1.2.0 | Container Storage Interface (CSI) Driver for Scaleway Block Storage | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Topology |
Seagate Exos X | csi-exos-x.seagate.com | v1.3 | CSI driver for Seagate Exos X and OEM systems | Persistent | Read/Write Single Pod | Yes | Snapshot, Expansion, Cloning |
SeaweedFS | seaweedfs-csi-driver | v1.0 | A Container Storage Interface (CSI) Driver for SeaweedFS | Persistent | Read/Write Multiple Pods | Yes | |
Secrets Store CSI Driver | secrets-store.csi.k8s.io | v0.0.10 | A Container Storage Interface (CSI) Driver for mounting secrets, keys, and certs stored in enterprise-grade external secrets stores as volumes. | Ephemeral | N/A | N/A | |
SmartX | csi-smtx-plugin | v1.0 | A Container Storage Interface (CSI) Driver for SmartX ZBS Storage | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion |
SMB | smb.csi.k8s.io | v1.0 | This driver allows Kubernetes to access SMB Server on both Linux and Windows nodes | Persistent | Read/Write Multiple Pods | Yes | |
SODA | csi-soda-plugin | v1.0 | A Container Storage Interface (CSI) Driver for SODA | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot |
SPDK-CSI | csi.spdk.io | v1.1 | A Container Storage Interface (CSI) Driver for SPDK | Persistent and Ephemeral | Read/Write Single Pod | Yes | |
StorageOS | storageos | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for StorageOS | Persistent | Read/Write Multiple Pods | Yes | |
Storidge | csi.cio.storidge.com | v0.3, v1.0 | A Container Storage Interface (CSI) Driver for Storidge CIO | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion |
StorPool | csi-driver.storpool.com | v1.0 | A Container Storage Interface (CSI) Driver for StorPool | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Expansion |
Synology | csi.san.synology.com | v1.0 | A Container Storage Interface (CSI) Driver for Synology NAS | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning |
Tencent Cloud Block Storage | com.tencent.cloud.csi.cbs | v1.0 | A Container Storage Interface (CSI) Driver for Tencent Cloud Block Storage | Persistent | Read/Write Single Pod | Yes | Snapshot |
Tencent Cloud File Storage | com.tencent.cloud.csi.cfs | v1.0 | A Container Storage Interface (CSI) Driver for Tencent Cloud File Storage | Persistent | Read/Write Multiple Pods | Yes | |
Tencent Cloud Object Storage | com.tencent.cloud.csi.cosfs | v1.0 | A Container Storage Interface (CSI) Driver for Tencent Cloud Object Storage | Persistent | Read/Write Multiple Pods | No | |
TopoLVM | topolvm.io | v1.1 | A Container Storage Interface (CSI) Driver for LVM | Persistent | Read/Write Single Node | Yes | Raw Block, Expansion, Topology, Snapshot, Cloning, Storage Capacity Tracking |
Toyou CSI | csi.toyou.com | v1.9 | A Container Storage Interface (CSI) Driver for Toyou Storage | Persistent | Read/Write Multiple Pods | Yes | |
TrueNAS | csi.hpe.com | v1.3 | A community supported Container Storage Provider (CSP) that leverages the HPE CSI Driver for Kubernetes. Works with TrueNAS CORE, TrueNAS SCALE and FreeNAS using iSCSI only | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning |
VAST Data | csi.vastdata.com | v1.2 | A Container Storage Interface (CSI) Driver for VAST Data | Persistent and Ephemeral | Read/Write Multiple Pods | Yes | Snapshot, Expansion |
XSKY-EBS | csi.block.xsky.com | v1.0 | A Container Storage Interface (CSI) Driver for XSKY Distributed Block Storage (X-EBS) | Persistent | Read/Write Single Pod | Yes | Raw Block, Snapshot, Expansion, Cloning |
XSKY-FS | csi.fs.xsky.com | v1.0 | A Container Storage Interface (CSI) Driver for XEDP, XEUS, XUDS, XGFS, X3000, X5000 | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion |
Vault | secrets.csi.kubevault.com | v1.0 | A Container Storage Interface (CSI) Driver for mounting HashiCorp Vault secrets as volumes. | Ephemeral | N/A | N/A | |
VDA | csi.vda.io | v1.0 | An open source block storage system based on SPDK | Persistent | Read/Write Single Pod | N/A | |
Veritas InfoScale Volumes | org.veritas.infoscale | v1.2 | A Container Storage Interface (CSI) Driver for Veritas InfoScale volumes | Persistent | Read/Write Multiple Pods | Yes | Snapshot, Expansion, Cloning |
vSphere | csi.vsphere.vmware.com | v1.4 | A Container Storage Interface (CSI) Driver for VMware vSphere | Persistent | Read/Write Single Pod (Block Volume) Read/Write Multiple Pods (File Volume) | Yes | Raw Block, Expansion (Block Volume), Topology Aware (Block Volume), Snapshot (Block Volume) |
Vultr Block Storage | block.csi.vultr.com | v1.2 | A Container Storage Interface (CSI) Driver for Vultr Block Storage | Persistent | Read/Write Single Pod | Yes | |
WekaIO | csi.weka.io | v1.0 | A Container Storage Interface (CSI) Driver for mounting WekaIO WekaFS filesystem as volumes | Persistent | Read/Write Multiple Pods | Yes | |
Yandex.Cloud | yandex.csi.flant.com | v1.2 | A Container Storage Interface (CSI) plugin for Yandex.Cloud Compute Disks | Persistent | Read/Write Single Pod | Yes | |
YanRongYun | ? | v1.0 | A Container Storage Interface (CSI) Driver for YanRong YRCloudFile Storage | Persistent | Read/Write Multiple Pods | Yes | |
Zadara-CSI | csi.zadara.com | v1.0, v1.1 | A Container Storage Interface (CSI) plugin for Zadara VPSA Storage Array & VPSA All-Flash | Persistent | Read/Write Multiple Pods | Yes | Raw Block, Snapshot, Expansion, Cloning |
Sample Drivers
Name | Status | More Information |
---|---|---|
Flexvolume | Sample | |
HostPath | v1.2.0 | Only use for single-node tests. See the Example page for Kubernetes-specific instructions. |
ImagePopulator | Prototype | Driver that lets you use a container image as an ephemeral volume. |
In-memory Sample Mock Driver | v0.3.0 | The sample mock driver used for csi-sanity |
Synology NAS | v1.0.0 | An unofficial (and unsupported) Container Storage Interface Driver for Synology NAS. |
VFS Driver | Released | A CSI plugin that provides a virtual file system. |
API Reference
The following is the list of CSI APIs:
Volume Snapshot
Packages:
snapshot.storage.k8s.io/v1
Resource Types: VolumeSnapshot
VolumeSnapshot is a user’s request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot.
Field | Description |
---|---|
apiVersion string | snapshot.storage.k8s.io/v1 |
kind string | VolumeSnapshot |
metadata Kubernetes meta/v1.ObjectMeta | (Optional) Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec VolumeSnapshotSpec | spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. |
status VolumeSnapshotStatus | (Optional) status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. |
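As an illustration (not part of the generated reference), a minimal VolumeSnapshot that requests a dynamically created snapshot of an existing PersistentVolumeClaim might look like the sketch below; the object, class, and claim names are hypothetical:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot                           # hypothetical name
  namespace: default
spec:
  volumeSnapshotClassName: example-snapshot-class  # hypothetical VolumeSnapshotClass
  source:
    # Exactly one source member may be set; here the snapshot is taken
    # dynamically from a PVC in the same namespace.
    persistentVolumeClaimName: example-pvc
```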
VolumeSnapshotClass
VolumeSnapshotClass specifies parameters that an underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced.
Field | Description |
---|---|
apiVersion string | snapshot.storage.k8s.io/v1 |
kind string | VolumeSnapshotClass |
metadata Kubernetes meta/v1.ObjectMeta | (Optional) Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of the metadata field. |
driver string | driver is the name of the storage driver that handles this VolumeSnapshotClass. Required. |
parameters map[string]string | (Optional) parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes. |
deletionPolicy DeletionPolicy | deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are “Retain” and “Delete”. “Retain” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. “Delete” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. Required. |
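For example, a VolumeSnapshotClass for a hypothetical driver could be declared as in the sketch below. The driver name and parameters are placeholders; parameters are opaque to Kubernetes and entirely driver specific:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapshot-class   # hypothetical name
driver: example.csi.vendor.com   # must match the driver name reported by the CSI plugin
deletionPolicy: Delete           # or Retain; see DeletionPolicy below
parameters:                      # optional, opaque to Kubernetes; illustrative placeholder
  key: value
```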
VolumeSnapshotContent
VolumeSnapshotContent represents the actual “on-disk” snapshot object in the underlying storage system
Field | Description |
---|---|
apiVersion string | snapshot.storage.k8s.io/v1 |
kind string | VolumeSnapshotContent |
metadata Kubernetes meta/v1.ObjectMeta | (Optional) Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of the metadata field. |
spec VolumeSnapshotContentSpec | spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. |
status VolumeSnapshotContentStatus | (Optional) status represents the current information of a snapshot. |
DeletionPolicy
(string alias)
(Appears on: VolumeSnapshotClass, VolumeSnapshotContentSpec)
DeletionPolicy describes a policy for end-of-life maintenance of volume snapshot contents.
Value | Description |
---|---|
"Delete" | volumeSnapshotContentDelete means the snapshot will be deleted from the underlying storage system on release from its volume snapshot. |
"Retain" | volumeSnapshotContentRetain means the snapshot will be left in its current state on release from its volume snapshot. |
VolumeSnapshotContentSource
(Appears on: VolumeSnapshotContentSpec)
VolumeSnapshotContentSource represents the CSI source of a snapshot. Exactly one of its members must be set. Members in VolumeSnapshotContentSource are immutable.
Field | Description |
---|---|
volumeHandle string | (Optional) volumeHandle specifies the CSI “volume_id” of the volume from which a snapshot should be dynamically taken. This field is immutable. |
snapshotHandle string | (Optional) snapshotHandle specifies the CSI “snapshot_id” of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. |
VolumeSnapshotContentSpec
(Appears on: VolumeSnapshotContent)
VolumeSnapshotContentSpec is the specification of a VolumeSnapshotContent
Field | Description |
---|---|
volumeSnapshotRef Kubernetes core/v1.ObjectReference | volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. The VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. |
deletionPolicy DeletionPolicy | deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are “Retain” and “Delete”. “Retain” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. “Delete” means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the “DeletionPolicy” field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. |
driver string | driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. |
volumeSnapshotClassName string | (Optional) name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with a different set of values, and as such, should not be referenced post-snapshot creation. |
source VolumeSnapshotContentSource | source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. |
sourceVolumeMode Kubernetes core/v1.PersistentVolumeMode | (Optional) sourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either “Filesystem” or “Block”. If not specified, it indicates the source volume’s mode is unknown. This field is immutable. This field is an alpha field. |
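To make the pre-existing (static) case concrete, the sketch below pairs a manually created VolumeSnapshotContent with the VolumeSnapshot that binds to it. The driver name, snapshot handle, and object names are hypothetical; note that deletionPolicy and volumeSnapshotRef must be set by the user for pre-existing snapshots:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-content              # hypothetical name
spec:
  driver: example.csi.vendor.com     # CSI driver that owns the physical snapshot
  deletionPolicy: Retain             # must be specified for pre-existing snapshots
  source:
    snapshotHandle: snap-0123456789  # hypothetical CSI "snapshot_id" on the storage system
  volumeSnapshotRef:                 # VolumeSnapshot this content binds to
    name: example-imported-snapshot
    namespace: default
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-imported-snapshot
  namespace: default
spec:
  source:
    volumeSnapshotContentName: example-content
```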
VolumeSnapshotContentStatus
(Appears on: VolumeSnapshotContent)
VolumeSnapshotContentStatus is the status of a VolumeSnapshotContent object. Note that CreationTime, RestoreSize, ReadyToUse, and Error are in both VolumeSnapshotStatus and VolumeSnapshotContentStatus. Fields in VolumeSnapshotStatus are updated based on fields in VolumeSnapshotContentStatus; they are eventually consistent. These fields are duplicated in both objects for the following reasons: - Fields in VolumeSnapshotContentStatus can be used for filtering when importing a VolumeSnapshot. - VolumeSnapshotStatus is used by end users because they cannot see VolumeSnapshotContent. - The CSI snapshotter sidecar is lightweight as it only watches the VolumeSnapshotContent object, not the VolumeSnapshot object.
Field | Description |
---|---|
snapshotHandle string | (Optional) snapshotHandle is the CSI “snapshot_id” of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. |
creationTime int64 | (Optional) creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the “creation_time” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “creation_time” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. |
restoreSize int64 | (Optional) restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the “size_bytes” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “size_bytes” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. |
readyToUse bool | readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the “ready_to_use” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “ready_to_use” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it, otherwise, this field will be set to “True”. If not specified, it means the readiness of a snapshot is unknown. |
error VolumeSnapshotError | (Optional) error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. |
VolumeSnapshotError
(Appears on: VolumeSnapshotContentStatus, VolumeSnapshotStatus)
VolumeSnapshotError describes an error encountered during snapshot creation.
Field | Description |
---|---|
time Kubernetes meta/v1.Time | (Optional) time is the timestamp when the error was encountered. |
message string | (Optional) message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. |
VolumeSnapshotSource
(Appears on: VolumeSnapshotSpec)
VolumeSnapshotSource specifies whether the underlying snapshot should be dynamically taken upon creation or if a pre-existing VolumeSnapshotContent object should be used. Exactly one of its members must be set. Members in VolumeSnapshotSource are immutable.
Field | Description |
---|---|
persistentVolumeClaimName string | (Optional) persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exist, and needs to be created. This field is immutable. |
volumeSnapshotContentName string | (Optional) volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. |
VolumeSnapshotSpec
(Appears on: VolumeSnapshot)
VolumeSnapshotSpec describes the common attributes of a volume snapshot.
Field | Description |
---|---|
source VolumeSnapshotSource | source specifies where a snapshot will be created from. This field is immutable after creation. Required. |
volumeSnapshotClassName string | (Optional) volumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. volumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one has been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field. |
VolumeSnapshotStatus
(Appears on: VolumeSnapshot)
VolumeSnapshotStatus is the status of the VolumeSnapshot. Note that CreationTime, RestoreSize, ReadyToUse, and Error are in both VolumeSnapshotStatus and VolumeSnapshotContentStatus. Fields in VolumeSnapshotStatus are updated based on fields in VolumeSnapshotContentStatus; they are eventually consistent. These fields are duplicated in both objects for the following reasons: - Fields in VolumeSnapshotContentStatus can be used for filtering when importing a VolumeSnapshot. - VolumeSnapshotStatus is used by end users because they cannot see VolumeSnapshotContent. - The CSI snapshotter sidecar is lightweight as it only watches the VolumeSnapshotContent object, not the VolumeSnapshot object.
Field | Description |
---|---|
boundVolumeSnapshotContentName string | (Optional) boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. |
creationTime Kubernetes meta/v1.Time | (Optional) creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the “creation_time” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “creation_time” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. |
readyToUse bool | (Optional) readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the “ready_to_use” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “ready_to_use” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it, otherwise, this field will be set to “True”. If not specified, it means the readiness of a snapshot is unknown. |
restoreSize k8s.io/apimachinery/pkg/api/resource.Quantity | (Optional) restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the “size_bytes” value returned from CSI “CreateSnapshot” gRPC call. For a pre-existing snapshot, this field will be filled with the “size_bytes” value returned from the CSI “ListSnapshots” gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. |
error VolumeSnapshotError | (Optional) error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. |
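For orientation, a VolumeSnapshot that has been bound and is ready to use typically reports a status along the lines of the sketch below (all values are illustrative):

```yaml
status:
  boundVolumeSnapshotContentName: snapcontent-0123  # hypothetical VolumeSnapshotContent name
  creationTime: "2024-01-01T00:00:00Z"              # illustrative timestamp
  readyToUse: true
  restoreSize: 1Gi                                  # minimum size for volumes restored from this snapshot
```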
Generated with gen-crd-api-reference-docs on git commit b20011c8.
Troubleshooting
Known Issues
- [minikube-3378]: Volume mount causes minikube VM to become corrupted
Common Errors
Node plugin pod does not start with RunContainerError status
kubectl describe pod your-nodeplugin-pod
shows:
failed to start container "your-driver": Error response from daemon:
linux mounts: Path /var/lib/kubelet/pods is mounted on / but it is not a shared mount
Your Docker host is not configured to allow shared mounts; enable shared mount propagation on the host to resolve this.
External attacher can't find VolumeAttachments
If you have a Kubernetes 1.9 cluster, not being able to list VolumeAttachment objects and the following error are due to the missing
storage.k8s.io/v1alpha1=true
runtime configuration on the kube-apiserver:
$ kubectl logs csi-pod external-attacher
...
I0306 16:34:50.976069 1 reflector.go:240] Listing and watching *v1alpha1.VolumeAttachment from github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86
E0306 16:34:50.992034 1 reflector.go:205] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1alpha1.VolumeAttachment: the server could not find the requested resource
...
Problems with the external components
The external component images are under active development, and it can happen that they become incompatible with each other. If the issues above have been ruled out, contact the sig-storage team and/or run the e2e test:
go run hack/e2e.go -- --provider=local --test --test_args="--ginkgo.focus=Feature:CSI"