
I have a Kubernetes cluster running k3s. From a CronJob that is supposed to back up my volumes, I want to connect to the cluster via a Service Account in order to read some data about my volumes.

Therefore I created a Service Account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup-volumes-sa
automountServiceAccountToken: false

...and referenced and mounted it in my CronJob:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-volumes-cron
spec:
  schedule: "0 4 * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: backup-volumes-sa
          automountServiceAccountToken: true
          containers:
            - name: sync
            [...]

In the job I use the Node.js k8s Client Library like this:

const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();

if (process.env.KUBECONFIG)
    kc.loadFromFile(process.env.KUBECONFIG);
else
{
    // Falls back to the in-cluster config (auto-mounted Service Account) when no kubeconfig is set
    kc.loadFromDefault();
    kc.clusters[0].skipTLSVerify = true; // For testing
}

if (process.env.PRINT_CLUSTERINFO && process.env.PRINT_CLUSTERINFO === 'true')
{
    console.log("Cluster Info:");
    console.log(kc.clusters);
    console.log(kc.users);
}

const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
// The fourth parameter is the labelSelector
const res = await k8sApi.listPersistentVolumeClaimForAllNamespaces(undefined, undefined, undefined, 'include-in-backup=true');

I used the kubeconfig for local testing, but on the cluster I switched to .loadFromDefault(), which uses the auto-mounted Service Account token.
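
To double-check that the token really ends up in the pod (the pod-level automountServiceAccountToken: true overrides the false set on the ServiceAccount), the mount can be inspected from inside a pod created by the job. This is only a sketch, and the pod name is a placeholder:

kubectl exec -it <backup-volumes-cron-pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount/
# Expected output: ca.crt  namespace  token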

I only added the skipTLSVerify flag for testing and debugging; the behaviour is the same without it.

The library loads the Service Account, and when printing the cluster info it outputs the correct information:

Cluster Info:
[ { name: 'inCluster',
    caFile: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
    server: 'https://10.43.0.1:443',
    skipTLSVerify: true } ]
[ { name: 'inClusterUser',
    authProvider: { name: 'tokenFile', config: [Object] } } ]

But when executing a request against the cluster's API server I get this error:

(node:25) UnhandledPromiseRejectionWarning: HttpError: HTTP request failed
    at Request._callback (/usr/local/bin/google-sync/node_modules/@kubernetes/client-node/dist/gen/api/coreV1Api.js:11112:36)
    at Request.self.callback (/usr/local/bin/google-sync/node_modules/request/request.js:185:22)
    at Request.emit (events.js:189:13)
    at Request.<anonymous> (/usr/local/bin/google-sync/node_modules/request/request.js:1154:10)
    at Request.emit (events.js:189:13)
    at IncomingMessage.<anonymous> (/usr/local/bin/google-sync/node_modules/request/request.js:1076:12)
    at Object.onceWrapper (events.js:277:13)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1125:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
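
The stack trace hides the HTTP status code and response body. As a quick sanity check, the API can also be probed directly from inside the job pod with the mounted token; this is only a sketch and assumes curl is available in the image (the token and CA paths and the server address are the ones shown in the cluster info above):

SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
curl -sS --cacert "$SA_DIR/ca.crt" \
  -H "Authorization: Bearer $(cat $SA_DIR/token)" \
  "https://10.43.0.1:443/api/v1/persistentvolumeclaims?labelSelector=include-in-backup%3Dtrue"
# A 403 Forbidden here would point at missing RBAC permissions.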

I'm using k3s and haven't configured anything related to this myself. Does anybody know what's going wrong here?

Node Client Version: 0.20.0
k3s Version: 1.28.6+k3s2

Thank you!

EDIT: Thanks to @syed-hyder, who pointed out that my ServiceAccount needs a Role and a Binding. Unfortunately it still doesn't work in my case. I tried it with a Role/RoleBinding as well as a ClusterRole/ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backup-volumes-role
rules:
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
      - persistentvolumeclaims
      - persistentvolumeclaims/status
    verbs:
      - create
      - delete
      - get
      - list
      - watch
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup-volumes-sa
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backup-volumes-binding
subjects:
  - kind: ServiceAccount
    name: backup-volumes-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: backup-volumes-role
  apiGroup: rbac.authorization.k8s.io
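
After applying these manifests, the permission can be verified via impersonation (a sketch; it assumes the ServiceAccount lives in the default namespace, as in the binding above):

kubectl auth can-i list persistentvolumeclaims \
  --as=system:serviceaccount:default:backup-volumes-sa -A
# Should print "yes" once the ClusterRole and ClusterRoleBinding are in effect.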

1 Answer

Disclaimer: I am not familiar with Node.js.

I see that you are trying to list PersistentVolumeClaims across all namespaces. Have you assigned the appropriate Role and RoleBinding to your ServiceAccount?

If not, you can quickly create a ClusterRole that grants access to list PVCs in all namespaces and bind your ServiceAccount to it, for example as sketched below.
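
A minimal sketch using imperative kubectl commands (the role and binding names here are just examples, and the ServiceAccount is assumed to live in the default namespace):

# Grant get/list/watch on PVCs cluster-wide to the ServiceAccount
kubectl create clusterrole pvc-reader --verb=get,list,watch --resource=persistentvolumeclaims
kubectl create clusterrolebinding backup-volumes-pvc-reader --clusterrole=pvc-reader --serviceaccount=default:backup-volumes-sa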


6 Comments

Thank you so much! I must have overlooked it completely! So I created a Role and a RoleBinding, but it doesn't work. Unfortunately it's the same with a ClusterRole and ClusterRoleBinding... But your answer is a valuable addition! I'll add it to my question.
Can you please try executing the following command to check if the ServiceAccount has access to list PVCs in all namespaces: kubectl auth can-i get pvc -A --as backup-volumes-sa
Sure! The command returns a 'no'.
I am extremely sorry for the oversight. To check the permissions related to a serviceaccount, we should be executing kubectl auth can-i list pvc --as=system:serviceaccount:default:backup-volumes-sa -A. The previous command checks the permissions for a user named backup-volumes-sa.
I guess it will obviously return a yes. If it returns yes, please try listing the PersistentVolumeClaims within the default namespace from your code. Otherwise, let's go step by step as described in the library's documentation: give the ServiceAccount access to list pods in the default namespace and list those pods first.
