
I have the following Deployment in Kubernetes:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   labels:
     run: hello-node
   name: hello-node
   namespace: default
 spec:
   replicas: 2
   selector:
     matchLabels:
       run: hello-node
   strategy:
     rollingUpdate:
       maxSurge: 2
       maxUnavailable: 0
     type: RollingUpdate
   template:
     metadata:
       creationTimestamp: null
       labels:
         run: hello-node
     spec:
       containers:
       - image: <image>:<tag>
         imagePullPolicy: Always
         name: hello-node
         livenessProbe:
           httpGet:
             path: /rest/hello
             port: 8081
           initialDelaySeconds: 15
           timeoutSeconds: 1
         ports:
         - containerPort: 8081
           protocol: TCP
         resources:
           requests:
             cpu: 400m
         terminationMessagePath: /dev/termination-log
       dnsPolicy: ClusterFirst
       restartPolicy: Always
       securityContext: {}
       terminationGracePeriodSeconds: 30

The issue is that when I update my Deployment to, let's say, a new version of my image, Kubernetes instantly kills both pods with the old image and brings up two new pods with the new image. While the new pods are booting up I experience an interruption of service.

Because of the rollingUpdate strategy and the livenessProbe, I'm expecting Kubernetes to do the following:

  1. Start one pod with the new image
  2. Wait for the new pod to be healthy based on the livenessProbe
  3. Kill one pod with the old image
  4. Repeat until all pods have been migrated

Am I missing something here?

1 Answer


What you need is a readinessProbe.

The default state of Liveness before the initial delay is Success, whereas the default state of Readiness before the initial delay is Failure.

If you’d like your container to be killed and restarted if a probe fails, then specify a LivenessProbe and a RestartPolicy of Always or OnFailure.

If you’d like to start sending traffic to a pod only when a probe succeeds, specify a ReadinessProbe.

See container probes for more details.
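
For example, a readinessProbe could reuse the same endpoint your livenessProbe already checks. A minimal sketch based on the manifest in the question (the delay and timeout values are assumptions you should tune to your app's startup time):

 containers:
 - name: hello-node
   image: <image>:<tag>
   # ... livenessProbe, ports and resources as in the question ...
   readinessProbe:
     httpGet:
       path: /rest/hello        # same endpoint as the livenessProbe
       port: 8081
     initialDelaySeconds: 15    # assumed; give the app time to boot
     timeoutSeconds: 1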

To have the rolling update behavior you described, set maxSurge to 1 (the default value). This tells the Deployment to scale up at most one extra replica at a time. See the docs of maxSurge for more details.
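
Applied to the manifest above, the strategy block would look roughly like this (maxUnavailable: 0 is kept from the question so an old pod is only removed once a new one is ready):

 strategy:
   type: RollingUpdate
   rollingUpdate:
     maxSurge: 1        # bring up at most one extra pod at a time
     maxUnavailable: 0  # never drop below the desired replica count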


5 Comments

Thanks for that. The readinessProbe definitely helped with the interruption of service. Now k8s will start 2 new pods, wait for them to be ready, and when they are, it will kill the 2 old pods. This is better, but I'm still not getting the rolling update behaviour. Any idea why?
Can you elaborate more on "not getting the rolling update behavior"? Note that .strategy.rollingUpdate.maxSurge is set to 2 and maxUnavailable to 0. This means you expect the Deployment to have at most 4 replicas and at least 2 available replicas (since you didn't set minReadySeconds, that implies 2 running replicas). So what you see is expected.
From the video on the Kubernetes website, I understood that during a rolling update, Kubernetes would alternate between removing old pods and adding new ones. If there were 3 pods it would go [3,0], [2,1], [1,2], [0,3], so there is a period where traffic is served by both versions of the service. I understand that in the old way of doing a rolling update (i.e. not with the Deployment object) there is a parameter to define the period between each update. Did I misunderstand?
I guess you mean the --update-period flag of kubectl rolling-update. If you want the new pod to wait for a while after it's ready before you consider it available, set minReadySeconds. If you want it to scale up/down new/old replicas one by one, set maxSurge to 1 (the default value). It's acting as expected since you tell the Deployment to scale up 2 more at a time. See the Deployment docs on maxSurge, and the sketch after these comments.
OK, many thanks, I had misunderstood what maxSurge was. Can you add details about maxSurge in the answer? Then I can accept the answer.
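
As a sketch of what the comments suggest, combining maxSurge: 1 for one-by-one replacement with minReadySeconds as an extra settling period (the 10-second value is purely illustrative):

 spec:
   replicas: 2
   minReadySeconds: 10    # illustrative; a new pod must stay ready this long before it counts as available
   strategy:
     type: RollingUpdate
     rollingUpdate:
       maxSurge: 1
       maxUnavailable: 0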
