I have the following Deployment in Kubernetes:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-node
  name: hello-node
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      run: hello-node
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-node
    spec:
      containers:
      - image: <image>:<tag>
        imagePullPolicy: Always
        name: hello-node
        livenessProbe:
          httpGet:
            path: /rest/hello
            port: 8081
          initialDelaySeconds: 15
          timeoutSeconds: 1
        ports:
        - containerPort: 8081
          protocol: TCP
        resources:
          requests:
            cpu: 400m
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
The issue is that when I update my Deployment to, say, a new version of my image, Kubernetes instantly kills both pods running the old image and brings up two new pods with the new image. While the new pods are booting up, I experience a service interruption.
Because of the rollingUpdate strategy and the livenessProbe, I expect Kubernetes to do the following:
- Start one pod with the new image
- Wait for the new pod to be healthy based on the livenessProbe
- Kill one pod with the old image
- Repeat until all pods have been migrated
Am I missing something here?
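In case it is relevant: I have not configured a readinessProbe. My understanding is that rolling updates gate on pod *readiness* rather than liveness, so a probe like the following might be what is missing (this is a sketch, assuming the same /rest/hello endpoint and timings as the livenessProbe above):

```yaml
# Added to the hello-node container spec; without a readinessProbe,
# a pod counts as available as soon as its containers start, so the
# rollout can proceed before the app is actually serving traffic.
readinessProbe:
  httpGet:
    path: /rest/hello   # assumption: same health endpoint as the livenessProbe
    port: 8081
  initialDelaySeconds: 15
  timeoutSeconds: 1
```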