I am working with an EKS cluster that has one worker node group with a single EC2 instance, deployed across the VPC's 2 private subnets. The EKS control plane and the worker node are in the same VPC, and the cluster itself spans 2 public and 2 private subnets.
I am trying to install MySQL via the Bitnami Helm chart and schedule the pod onto that worker node (via a nodeSelector), but after endless tries it is still failing:
Commands I used:
kubectl get nodes
kubectl label node ip-<someip>.ec2.internal app=mysql-node
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install mysql bitnami/mysql \
-n mysql-db \
-f values.yaml
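Two small notes on these commands: the node label applied in the second command can be verified, and the mysql-db namespace has to exist before installing into it with -n (otherwise helm needs --create-namespace). Roughly:
kubectl get nodes -l app=mysql-node   # should list the labelled worker node
kubectl create namespace mysql-db     # or add --create-namespace to the helm install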
The contents of values.yaml are as follows:
[root@ip-192-168-71-83 ~]# cat values.yaml
auth:
  rootPassword: "admin"
  database: "benchmarking_db"
  username: "admin"
persistence:
  enabled: true
  size: 8Gi
  storageClass: "gp2"
nodeSelector:
  app: mysql-node
service:
  type: ClusterIP
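In case the file is not being picked up correctly, the values Helm actually applied can be dumped with (assuming the release name mysql from the install command above):
helm get values mysql -n mysql-db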
My PVC details are as follows:
[root@ip-192-168-71-83 ~]# kubectl describe pvc data-my-mysql-0 -n mysql-db
Name:          data-my-mysql-0
Namespace:     mysql-db
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/component=primary
               app.kubernetes.io/instance=my-mysql
               app.kubernetes.io/name=mysql
               app.kubernetes.io/part-of=mysql
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       my-mysql-0
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  13s (x21 over 5m12s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
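The empty StorageClass field above matches the event message; the storage class requested by the claim itself can be checked directly with:
kubectl get pvc data-my-mysql-0 -n mysql-db -o jsonpath='{.spec.storageClassName}'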
Pods and other details are as follows:
[root@ip-192-168-71-83 ~]# kubectl get pvc,pods -n mysql-db
NAME                                    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/data-my-mysql-0   Pending                                                     <unset>                 13m
NAME READY STATUS RESTARTS AGE
pod/my-mysql-0 0/1 Pending 0 13m
[root@ip-192-168-71-83 ~]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 24h
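I have not been able to confirm whether gp2 is also marked as the cluster's default storage class; that shows up in its annotations, e.g.:
kubectl describe sc gp2   # look for the storageclass.kubernetes.io/is-default-class annotation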
So, where exactly am I going wrong? My worker node has a 200 GB EBS volume attached.