Pod and Persistent Volume with an Existing EBS Volume in EKS

Stateful containers and persistent volumes are not rare use cases in the Kubernetes world. If you are running MySQL, MongoDB, or a Redis-style setup, you will definitely need a persistent volume.

Why Persistent Volumes

There are two categories of volumes available in Kubernetes: ordinary volumes and persistent volumes. A persistent volume comes with the added luxury of existing independently of the pod it is attached to, making it completely independent of the pod’s life cycle.

A simple use case for a persistent volume: you need to deploy a stateful container with a database backed by an existing persistent volume, so that your container or pod can access the data whenever it is scheduled or restarted.

Storage Class

A storage class is needed in order to specify the provisioner, the volume type, and the binding properties that Kubernetes will apply when a persistent volume claim references this storage class.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
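Assuming the manifest above is saved as storageclass.yaml (the file name is an assumption), you can apply it and verify it with standard kubectl commands:

```shell
# Apply the StorageClass manifest
kubectl apply -f storageclass.yaml

# Confirm the class exists and shows the kubernetes.io/aws-ebs provisioner
kubectl get storageclass
```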

Provision AWS EBS for PV

When you are creating the persistent volume, make sure that:

  • the worker nodes are AWS EC2 instances, not Fargate
  • the EBS volume you create is in the same region and availability zone as the worker node where you want to attach the PV

To provision the AWS Elastic Block Store volume, run the CLI command below, or go to the AWS Management Console and create an EBS volume under the EC2 service.

aws ec2 create-volume --region us-east-1 --availability-zone us-east-1a --size 10 --volume-type gp2

The above CLI uses the us-east-1 region and availability zone us-east-1a; change the region and zone as per your setup.

From the response of the above call, note the “VolumeId”, which will be used while creating the persistent volume.
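If you are scripting this, the volume ID can be captured in one step with the CLI’s --query flag (the variable name is illustrative):

```shell
# Create the volume and capture its ID directly
VOLUME_ID=$(aws ec2 create-volume \
  --region us-east-1 \
  --availability-zone us-east-1a \
  --size 10 \
  --volume-type gp2 \
  --query 'VolumeId' \
  --output text)

echo "$VOLUME_ID"
```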

In my setup, EKS runs in us-east-1 and the worker nodes are AWS EC2 instances managed by an Auto Scaling group.

Create a persistent volume from the EBS Volume ID

Add the volume to the cluster by creating a persistent volume. Below is an example pv.yaml file; create the PersistentVolume by running

kubectl create -f pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: xfs
    volumeID: aws://us-east-1a/vol-xxxxxxxxxxxx
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-retain
  volumeMode: Filesystem

Before creating the PV, attach the EBS volume to any EC2 instance and format it with a filesystem. In the sample YAML I use xfs as the fsType.
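That preparation step can be sketched as below (the instance ID and device name are placeholders; on Nitro-based instances the device may appear as /dev/nvme1n1 inside the OS):

```shell
# Attach the volume to a running EC2 instance (IDs are placeholders)
aws ec2 attach-volume \
  --volume-id vol-xxxxxxxxxxxx \
  --instance-id i-0123456789abcdef0 \
  --device /dev/xvdf

# On the instance: create an XFS filesystem on the new device
sudo mkfs -t xfs /dev/xvdf

# Detach so Kubernetes can attach it to the node that schedules the pod
aws ec2 detach-volume --volume-id vol-xxxxxxxxxxxx
```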

Create a Persistent Volume Claim

The next step is creating a persistent volume claim to attach the volume to a pod. A PersistentVolumeClaim (PVC) is a request for storage by a user; once bound, it claims the persistent volume exclusively for the pod’s use. Below is an example of the PVC

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mysql
  name: pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2-retain
  volumeMode: Filesystem
  volumeName: pv

The kubectl command below creates the PVC

kubectl create -f pvc.yaml
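After creating the claim, you can verify that it bound to the pre-created PV (the names match the manifests above):

```shell
# STATUS should read "Bound" and VOLUME should show "pv"
kubectl get pvc pvc

# Shows the bound volume, storage class, and any binding events
kubectl describe pvc pvc
```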

Using Persistent Volumes in a Pod

Once you have a PersistentVolumeClaim, you can mount it as a volume in your pods. Note that an EBS volume can only be used by a single pod at a time, so the access mode of your PVC can only be ReadWriteOnce.

Under the hood, the EBS Volume stays detached from your nodes as long as it is not claimed by a Pod. As soon as a Pod claims it, it gets attached to the node that holds the Pod.
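You can observe this attachment state from the AWS side (the volume ID is the one created earlier):

```shell
# Reports "in-use" once a pod claims the volume, "available" otherwise
aws ec2 describe-volumes --volume-ids vol-xxxxxxxxxxxx \
  --query 'Volumes[0].State' --output text
```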

Use volumeMounts in the pod creation YAML, and set the claim name to the PVC name created in the earlier step.
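A minimal pod manifest along those lines might look like this (the pod name, image, and mount path are assumptions for illustration; claimName matches the PVC created above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:8.0          # image is an assumption
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # mount path is an assumption
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc          # the PVC created in the earlier step
```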

Read here to know how to deploy MySQL with a PVC and PV on an EBS volume on EKS.


Also published on Medium.
