vSphere CSI Driver - Known Issues

This section lists the major known issues with the VMware vSphere CSI driver. For the complete list of issues, please check our GitHub issues page. If you notice an issue not listed there, please file an issue on the GitHub repository.

Issue 1: Filesystem resize is skipped if the original PVC is deleted while the FilesystemResizePending condition is still on the PVC, but the PV and its associated volume on the storage system are not deleted due to the Retain policy.

  • Impact: A user may create a new PVC to statically bind to the undeleted PV. In this case, the volume on the storage system is resized, but the filesystem is not resized accordingly. The user may then try to write to a volume whose filesystem has run out of capacity.
  • Upstream issue is tracked at: https://github.com/kubernetes/kubernetes/issues/88683
  • Workaround: The user can log into the container and manually resize the filesystem.
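As a sketch of this workaround, assuming an ext4 filesystem in a pod named app-0, mounted at /data on device /dev/sdb (all placeholder names):

```shell
# Confirm the filesystem is smaller than the resized block device.
kubectl exec app-0 -- df -h /data
kubectl exec app-0 -- lsblk

# Grow the filesystem to fill the device.
# For ext4 use resize2fs; for XFS use: xfs_growfs /data
kubectl exec app-0 -- resize2fs /dev/sdb
```

Run df again afterwards to verify the filesystem now reports the expanded capacity.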

Issue 2: Volume cannot be resized in a StatefulSet or another workload API such as Deployment.

Issue 3: Recovery from volume expansion failure is not supported.

Issue 4: CNS file volume has a limitation of 8K for metadata.

  • Impact: It is possible that not all metadata can be pushed to the CNS file share, since a maximum of 64 clients per file volume must be supported.
  • Workaround: None

Issue 5: CSI DeleteVolume may be called before the volume is detached.

  • Impact: CSI may receive a DeleteVolume call before ControllerUnpublishVolume has completed.
  • Upstream issue is tracked at: https://github.com/kubernetes/kubernetes/issues/84226
  • Workaround:

    1. Delete the Pod with force: kubectl delete pods <pod> --grace-period=0 --force
    2. Find the VolumeAttachment for the volume that remained undeleted, and note the Node it references.
    3. Manually detach the disk from the Node VM.
    4. Edit this VolumeAttachment and remove the finalizer; it will then be deleted.
    5. Use govc to manually delete the FCD.
    6. Edit the Pending PV and remove the finalizer; it will then be deleted.
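The steps above can be sketched as shell commands. The pod, PV, node, and datastore names are placeholders; the disk detach in step 3 is done in the vSphere Client, and govc disk.rm is one way to delete the FCD in step 5:

```shell
# 1. Force-delete the stuck Pod.
kubectl delete pods my-pod --grace-period=0 --force

# 2. Find the VolumeAttachment for the undeleted volume and note its node.
kubectl get volumeattachment | grep my-pv

# 3. Detach the disk from the Node VM manually in the vSphere Client.

# 4. Remove the finalizer so the VolumeAttachment is deleted.
kubectl patch volumeattachment <va-name> --type=merge \
  -p '{"metadata":{"finalizers":null}}'

# 5. Delete the FCD backing the volume (the id is the PV's volumeHandle).
govc disk.ls -ds my-datastore
govc disk.rm <fcd-id>

# 6. Remove the finalizer from the Pending PV so it is deleted.
kubectl patch pv my-pv --type=merge -p '{"metadata":{"finalizers":null}}'
```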

Issue 6: In a vSphere with Kubernetes cluster, DevOps users can manually modify the volume health status of a PVC, since the volume health annotation is not a read-only field. DevOps users should avoid modifying the volume health annotation manually. If the annotation is set to a random or incorrect health status, any software that depends on this volume health will be affected.

  • Impact: Any random volume health status set by a DevOps user in the vSphere with Kubernetes cluster is also reflected in the volume health status of the corresponding PVC in the Tanzu Kubernetes Grid cluster.
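To inspect the health status without modifying it, something like the following can be used. The annotation key volumehealth.storage.kubernetes.io/health is what the driver is understood to set, and the PVC name is a placeholder:

```shell
# Read, but do not edit, the volume health annotation on a PVC.
kubectl get pvc my-pvc -o yaml | grep volumehealth
```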
