Some storage systems expose volumes that are not equally accessible by all nodes in a Kubernetes cluster. Instead, volumes may be constrained to some subset of nodes in the cluster. The cluster may be segmented into, for example, “racks” or “regions” and “zones” or some other grouping, and a given volume may be accessible only from one of those groups.
To enable orchestration systems, like Kubernetes, to work well with storage systems which expose volumes that are not equally accessible by all nodes, the CSI spec enables:
- Ability for a CSI Driver to opaquely specify where a particular node exists (e.g. "node A" is in "zone 1").
- Ability for Kubernetes (users or components) to influence where a volume is provisioned (e.g. provision new volume in either "zone 1" or "zone 2").
- Ability for a CSI Driver to opaquely specify where a particular volume exists (e.g. "volume X" is accessible by all nodes in "zone 1" and "zone 2").
Kubernetes and the external-provisioner use these abilities to make intelligent scheduling and provisioning decisions: Kubernetes can both influence where a volume is provisioned and act on topology information for each volume.
To support topology in a CSI driver, the following must be implemented:
- The plugin must fill in `accessible_topology` in `NodeGetInfoResponse`. This information will be used to populate the Kubernetes CSINode object and add the topology labels to the Node object.
- During `CreateVolume`, the topology information will get passed in through `CreateVolumeRequest.accessibility_requirements`.
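To illustrate the first requirement, here is a simplified Go sketch. A real driver would return the generated types from the CSI spec (`csi.NodeGetInfoResponse`, `csi.Topology`); local stand-ins are defined here so the example is self-contained, and the topology key `topology.example.com/zone` is a made-up, driver-specific example:

```go
package main

import "fmt"

// Topology mirrors the CSI Topology message: a set of opaque
// key/value segments describing where a node sits (e.g. its zone).
type Topology struct {
	Segments map[string]string
}

// NodeGetInfoResponse mirrors the CSI response. AccessibleTopology is
// what Kubernetes uses to label the Node and populate the CSINode object.
type NodeGetInfoResponse struct {
	NodeID             string
	AccessibleTopology *Topology
}

// nodeGetInfo sketches a driver filling in its node's topology.
func nodeGetInfo(nodeID, zone string) *NodeGetInfoResponse {
	return &NodeGetInfoResponse{
		NodeID: nodeID,
		AccessibleTopology: &Topology{
			Segments: map[string]string{
				// The segment key is driver-specific; this one is illustrative.
				"topology.example.com/zone": zone,
			},
		},
	}
}

func main() {
	resp := nodeGetInfo("node-a", "zone-1")
	fmt.Println(resp.AccessibleTopology.Segments["topology.example.com/zone"])
}
```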
In the StorageClass object, both `volumeBindingMode` values of `Immediate` and `WaitForFirstConsumer` are supported.

If `Immediate` is set, then the external-provisioner will pass in all available topologies in the cluster for the driver.

If `WaitForFirstConsumer` is set, then the external-provisioner will wait for the scheduler to pick a node. The topology of that selected node will then be set as the first entry in `CreateVolumeRequest.accessibility_requirements.preferred`. All remaining topologies are still included in the `requisite` and `preferred` fields to support storage systems that span across multiple topologies.
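As an illustration, a StorageClass using delayed binding might look like the following (the provisioner name `csi.example.com` is a placeholder, not a real driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: csi.example.com   # placeholder CSI driver name
volumeBindingMode: WaitForFirstConsumer
```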
The topology feature requires the external-provisioner sidecar with the Topology feature gate enabled:
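For example, the sidecar container would be started with the following argument (deployment details omitted):

```
--feature-gates=Topology=true
```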
In the Kubernetes cluster, the `CSINodeInfo` feature must be enabled on both the Kubernetes master and nodes (refer to the CSINode Object section for more info):
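Assuming feature gates are set via component flags, this means passing the following to both the kube-apiserver and the kubelet:

```
--feature-gates=CSINodeInfo=true
```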
To function properly, all Kubernetes masters and nodes must be on at least Kubernetes 1.14. If a selected node is on a lower version, topology is ignored and not passed to the driver during `CreateVolume`.
The alpha feature in the external-provisioner is not compatible across Kubernetes versions. In addition, Kubernetes master and node version skew and upgrades are not supported.
The `KubeletPluginsWatcher` feature gate must be enabled on both the Kubernetes master and nodes.
The CSINodeInfo CRDs also have to be manually installed in the cluster.
Note that a storage system may also have an "internal topology" different from (independent of) the topology of the cluster where workloads are scheduled. That is, volumes exposed by the storage system are equally accessible by all nodes in the Kubernetes cluster, but the storage system has some internal topology that may influence, for example, the performance of a volume from a given node.
CSI does not currently expose a first-class mechanism to influence such storage system internal topology during provisioning, so Kubernetes cannot programmatically influence it. However, a CSI driver may expose the ability to specify internal storage topology during volume provisioning using an opaque parameter in the `CreateVolume` CSI call; CSI enables drivers to expose an arbitrary set of configuration options during dynamic provisioning by allowing opaque parameters to be passed from cluster admins to the storage plugin. This would enable cluster admins to control the storage system's internal topology during provisioning.
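For example, such a driver might accept an internal-topology hint through the StorageClass `parameters` field, which Kubernetes passes through to `CreateVolume` untouched. Both the provisioner name and the parameter key below are hypothetical, chosen only to illustrate the pattern:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: internal-topology-example
provisioner: csi.example.com     # placeholder CSI driver name
parameters:
  internalTopology: "pool-a"     # hypothetical opaque parameter interpreted by the driver
```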