I have a StatefulSet deployed across 2 different node pools in AKS. I have a total of 4 replicas: 1 on nodepool1 and 3 on nodepool2. I need only the 3 on nodepool2, i.e. I want to scale down to just 3 replicas. Is it possible to do? I tried manually cordoning and draining the node on nodepool1, but the StatefulSet refuses to scale down since pod-0 is on nodepool1 and refuses to be drained from there.
See the picture below for what happened when I put a taint on nodepool1 to prevent the replica from running there, but the pod refused to be evicted.

2 Answers
If you have a StatefulSet deployed across two node pools (nodepool1 and nodepool2) in your AKS cluster and you want to scale it down to 3 replicas, you can taint nodepool1's nodes to prevent any pods from being scheduled on them:
kubectl taint nodes <node-name> nodepool=nodepool1:NoSchedule
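If nodepool1 has more than one node, you can taint all of them at once by selecting on the agentpool label that AKS puts on every node (a sketch; the taint key/value are the same as above):
kubectl taint nodes -l agentpool=nodepool1 nodepool=nodepool1:NoSchedule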

Edit your StatefulSet YAML to include a nodeAffinity rule so that all pods are rescheduled onto nodepool2 (AKS automatically labels each node with agentpool=<pool-name>):
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: agentpool
          operator: In
          values:
          - nodepool2
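One way to roll this out (a sketch, assuming the manifest lives in a file called statefulset.yaml and the StatefulSet uses the default RollingUpdate strategy) is to re-apply the manifest and watch the rollout:
kubectl apply -f statefulset.yaml
kubectl rollout status statefulset/<statefulset-name>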

Cordon and then drain the nodepool1 nodes to prevent new pods from being scheduled there and to evict the ones still running:
kubectl cordon <node-name>
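Cordon alone only blocks new scheduling; to evict the pod that is already running on nodepool1 you also need to drain the node. A sketch (the flags skip DaemonSet-managed pods and allow emptyDir data to be discarded):
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data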


Once all pods are running on nodepool2, scale the StatefulSet down to 3 replicas:
kubectl scale statefulset <statefulset-name> --replicas=3
Verify the pods; you should now see 3 replicas, all running on nodepool2:
kubectl get pods -o wide

StatefulSets do not automatically delete the PersistentVolumeClaims (PVCs) of scaled-down replicas, so you have to delete the leftover one yourself. Check what is still there with:
kubectl get pvc
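As a sketch, assuming the StatefulSet is named web and its volumeClaimTemplate is named data (hypothetical names, not taken from the question), the PVC left behind by the removed replica would follow the <template>-<statefulset>-<ordinal> pattern and could be deleted with:
kubectl delete pvc data-web-3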

Relevant MS docs-
3 Comments
The pod is now stuck in a pending state. Please see the updated original question for what it looks like.

Label the nodes in your node pools:
NodePool1: nodepool=nodepool1
NodePool2: nodepool=nodepool2
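For example (a sketch; the node names are placeholders):
kubectl label nodes <nodepool1-node-name> nodepool=nodepool1
kubectl label nodes <nodepool2-node-name> nodepool=nodepool2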
Update the StatefulSet to add affinity to force Pods to run only on nodepool2.
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodepool
                operator: In
                values:
                - nodepool2
Then check with:
kubectl get pods -o wide

I think what you are asking will work like this; later, if and when you want nodepool1 again, just untaint it.
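For reference, removing a taint later uses the same key and effect with a trailing minus (a sketch matching the taint used earlier in the answer):
kubectl taint nodes <node-name> nodepool=nodepool1:NoSchedule-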