
I have a StatefulSet deployed across 2 different node pools in AKS, with a total of 4 replicas: 1 on nodepool1 and 3 on nodepool2. I want all replicas to run on nodepool2 and to scale down to just 3 replicas. Is that possible? I tried manually cordoning and draining the node on nodepool1, but the StatefulSet refuses to scale down, since pod-0 sits on nodepool1 and cannot be drained from there. Even after I put a taint on nodepool1 to prevent replicas from running there, the pod refused to be evicted.

  • Is this a one-time scaling operation, or do you need it as a long-term solution? Commented Dec 24, 2024 at 4:02
  • Could you try tainting nodepool1? That should prevent new pods from being scheduled there. Then update the StatefulSet to include a nodeAffinity rule targeting only nodepool2, manually evict the pods from nodepool1, and adjust the replica count, i.e. scale it down to 3. Check with kubectl get pods -o wide. I think what you are asking will work like this; later, if and when you want nodepool1 again, just untaint it. Commented Dec 24, 2024 at 4:11
  • Tried all of those; none worked. See the image in the question. Commented Dec 24, 2024 at 16:27

2 Answers


If you have a StatefulSet deployed across two node pools (nodepool1 and nodepool2) in your AKS cluster and you want to scale it down to 3 replicas all running on nodepool2, first taint nodepool1's nodes to prevent any pods from being scheduled on them:

kubectl taint nodes <node-name> nodepool=nodepool1:NoSchedule
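If you are not sure which nodes belong to nodepool1, AKS labels each node with its agent pool name, so listing them should work roughly like this (assuming the default agentpool node label):

kubectl get nodes -l agentpool=nodepool1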


Edit your StatefulSet YAML to include a nodeAffinity rule so that pods can only be scheduled on nodepool2:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: agentpool
          operator: In
          values:
          - nodepool2
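This block goes under spec.template.spec of the StatefulSet (the second answer below shows the full path). As a sketch, assuming the manifest lives in a local file (statefulset.yaml is a placeholder name), you can apply it with:

kubectl apply -f statefulset.yaml
# or edit the live object directly:
kubectl edit statefulset <statefulset-name>

Note that changing the pod template does not move pods that are already running; they only pick up the new affinity when they are recreated, which the drain in the next step takes care of.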


Cordon and drain the nodepool1 nodes so no new pods are scheduled there and the existing pods are evicted:

kubectl cordon <node-name>
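Draining is the step that actually evicts the running pods. A typical invocation (the flags below are the usual ones when the node also runs DaemonSets or pods with emptyDir volumes):

kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data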


Once all pods are running on nodepool2, scale the StatefulSet down to 3 replicas:

kubectl scale statefulset <statefulset-name> --replicas=3

Verify the pods; you should now see 3 replicas, all on nodepool2:

kubectl get pods -o wide


StatefulSets do not automatically delete PersistentVolumeClaims (PVCs) for scaled-down replicas, so you have to clean those up yourself. First check what is left:

kubectl get pvc

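Then delete the PVC that belonged to the removed replica. A minimal sketch, assuming the default PVC naming convention <volumeClaimTemplate-name>-<statefulset-name>-<ordinal> (scaling from 4 to 3 removes the ordinal-3 pod, so its claim is the leftover; use the exact name kubectl get pvc shows):

kubectl delete pvc <volume-claim-template-name>-<statefulset-name>-3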

Relevant MS docs-


3 Comments

I tried that, but the problem is that the pod on nodepool1 was the first pod the StatefulSet deployed and hence was named pod-0. When scaling happened it got stuck in Pending even though the taint was placed on nodepool1 to prevent it from running there; evicting it from nodepool1 did not do any good, it just stayed stuck in the Pending state. Please see the updated original question for what it looks like.
Will look into it and share my findings with you.
Could you check whether your PersistentVolume is pinned to a specific availability zone that the new node pool doesn't cover? For example, if your disk was provisioned in eastus2-1 but your new node pool is only in eastus2-2, the volume can't attach across zones, and the pod will stay stuck in Pending forever.
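To check whether a PV is zone-pinned, something like this should work (look for a node affinity term on topology.kubernetes.io/zone; the PV name is whatever kubectl get pv reports for your claim):

kubectl get pv
kubectl describe pv <pv-name>
# look at the Node Affinity section for topology.kubernetes.io/zone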

Label the nodes in your node pools:

NodePool1: nodepool=nodepool1
NodePool2: nodepool=nodepool2
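These labels are not present by default. Assuming you set them per node with kubectl, it would look like this (on AKS you could also rely on the built-in agentpool label instead of a custom one):

kubectl label nodes <node-in-nodepool1> nodepool=nodepool1
kubectl label nodes <node-in-nodepool2> nodepool=nodepool2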

Update the StatefulSet spec to add a nodeAffinity rule that forces the Pods to run only on nodepool2:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: nodepool
                    operator: In
                    values:
                      - nodepool2

