Sample Kubernetes CKA Questions and Answers

The Certified Kubernetes Administrator (CKA) certification provides assurance that Kubernetes administrators have the skills and knowledge to perform the responsibilities of the role.

The Certified Kubernetes Application Developer (CKAD) certification is designed to guarantee that holders have the knowledge, skills, and capability to design, configure, and expose cloud-native applications on Kubernetes, and to perform the responsibilities of Kubernetes application developers. It also assures that a Kubernetes application developer can use core primitives to build, monitor, and troubleshoot scalable applications in Kubernetes.

Prerequisites

Set configuration context

kubectl config use-context k8s
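
If you are not sure which contexts exist in the exam environment, list them and confirm the active one first:

kubectl config get-contexts

kubectl config current-context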

Question 1

Create a new service account with the name pvviewer. Grant this Service account access to list all PersistentVolumes in the cluster by creating an appropriate cluster role called pvviewer-role and ClusterRoleBinding called pvviewer-role-binding.

Next, create a pod called pvviewer with the image: redis and serviceaccount: pvviewer in the default namespace.

Answer to question 1

Create Service account

kubectl create serviceaccount pvviewer

Create cluster role

kubectl create clusterrole pvviewer-role \
    --verb=list \
    --resource=PersistentVolumes

Create cluster role binding

kubectl create clusterrolebinding pvviewer-role-binding \
    --clusterrole=pvviewer-role \
    --serviceaccount=default:pvviewer

Verify

kubectl auth can-i list persistentvolumes \
    --as=system:serviceaccount:default:pvviewer

Generate yaml for the pod

kubectl run pvviewer --image=redis --dry-run=client -o yaml

This generates the base manifest. Add the service account under spec and save it (for example as pvviewer.yaml):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvviewer
  name: pvviewer
  namespace: default
spec:
  serviceAccountName: pvviewer
  containers:
    - image: redis
      name: redis
  dnsPolicy: ClusterFirst
  restartPolicy: Always
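
Assuming the manifest above is saved as pvviewer.yaml (the filename is an assumption), apply it and confirm the pod runs with the intended service account:

kubectl apply -f pvviewer.yaml

kubectl get pod pvviewer -o jsonpath='{.spec.serviceAccountName}'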

Question 2

Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Record the version. Next, upgrade the deployment to version 1.17 using a rolling update. Make sure that the version upgrade is recorded in the resource annotation.

Answer to question 2

To generate manifest

kubectl create deployment nginx-deploy \
    --image=nginx:1.16 --dry-run=client -o yaml

This is the generated YAML file; save it as nginx-deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
        - image: nginx:1.16
          name: nginx
          ports:
            - containerPort: 80

Apply

kubectl apply -f nginx-deploy.yaml --record

kubectl get deployment

kubectl rollout history deployment nginx-deploy

kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record

kubectl rollout history deployment nginx-deploy

kubectl describe deployment nginx-deploy
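
Note that --record is deprecated in newer kubectl releases. If it is not available, the same effect can be achieved by setting the change-cause annotation manually so the rollout history still records the version (the message text is just an example):

kubectl annotate deployment nginx-deploy kubernetes.io/change-cause="Upgraded image to nginx:1.17"

kubectl rollout history deployment nginx-deploy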

Question 3

Create a snapshot of the etcd instance running at https://127.0.0.1:2379. Save the snapshot to /opt/etcd-snapshot.db. Use the following certificates for the snapshot:

CA certificate: /etc/kubernetes/pki/etcd/ca.crt
Client certificate: /etc/kubernetes/pki/etcd/server.crt
Client key: /etc/kubernetes/pki/etcd/server.key

and then restore from the previous ETCD backup.

Answer to question 3

ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot save /opt/etcd-snapshot.db

Verify

Note: Do not perform this step in exam otherwise it may create an issue in the restoration process.

ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot status /opt/etcd-snapshot.db

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd-snapshot.db

Restore

You do not need to remember all the flags in the restore command; the built-in help (-h) lists them. A typical restore looks like this:

sudo systemctl stop etcd

ETCDCTL_API=3 etcdctl snapshot restore -h

ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-snapshot.db \
    --endpoints=https://127.0.0.1:2379 \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    --data-dir=/var/lib/etcd \
    --initial-advertise-peer-urls=http://10.0.0.4:2380 \
    --initial-cluster=<master-name>=http://10.0.0.4:2380 \
    --initial-cluster-token="etcd-cluster" --name="<master-name>"

sudo systemctl restart etcd
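
The systemctl steps above assume etcd runs as a standalone systemd service. On a kubeadm-managed cluster etcd usually runs as a static pod instead, so a common alternative (a sketch, with an assumed target directory) is to restore into a new data directory and point the static pod manifest at it:

ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-snapshot.db \
    --data-dir=/var/lib/etcd-from-backup

# then change the etcd data hostPath in /etc/kubernetes/manifests/etcd.yaml
# from /var/lib/etcd to /var/lib/etcd-from-backup; the kubelet recreates the etcd pod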

Question 4

Create a Persistent Volume with the given specification.

Volume Name: pv-analytics, Storage: 100Mi, Access modes: ReadWriteMany, Host Path: /pv/data-analytics

Answer to question 4

vim pv.yaml

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /pv/data-analytics

Then apply

kubectl create -f pv.yaml

kubectl get pv

Note: Read more about Persistent Volumes in Kubernetes.

Question 5

Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis with image redis:alpine to verify that workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis with image redis:alpine and a toleration so that it can be scheduled on node01.

key: env_type, value: production, operator: Equal and effect: NoSchedule

Answer to question 5

kubectl get nodes

kubectl taint node node01 env_type=production:NoSchedule

kubectl describe nodes node01 | grep -i taint

kubectl run dev-redis --image=redis:alpine --dry-run=client -o yaml > pod-redis.yaml

vim prod-redis.yaml

file prod-redis.yaml

apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  containers:
    - name: prod-redis
      image: redis:alpine
  tolerations:
    - effect: NoSchedule
      key: env_type
      operator: Equal
      value: production

Apply

kubectl create -f prod-redis.yaml
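
Assuming the generated pod-redis.yaml is applied unchanged, dev-redis should stay Pending (or land on another untainted node if one exists) because it has no toleration for the taint, while prod-redis should be scheduled on node01:

kubectl apply -f pod-redis.yaml

kubectl get pods -o wide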

Read More: Scheduling in K8s

Question 6

Set the node named worker node as unavailable and reschedule all the pods running on it. (Drain node)

Answer to question 6

kubectl get nodes

kubectl drain <worker node> --ignore-daemonsets

kubectl get nodes
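
Once maintenance on the node is finished, it can be made schedulable again (not part of this task, but the usual follow-up):

kubectl uncordon <worker node>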

Question 7

Create a Pod called non-root-pod, image: redis:alpine

runAsUser: 1000

fsGroup: 2000

Answer to question 7

file non-root-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: non-root-pod
      image: redis:alpine

Create

kubectl create -f non-root-pod.yaml
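
To confirm the security context took effect, check the IDs inside the container; id should report uid 1000, with group 2000 among the supplementary groups:

kubectl exec non-root-pod -- id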

Question 8

Create a NetworkPolicy which denies all ingress traffic

Answer to question 8

File policy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Apply

kubectl create -f policy.yaml
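
To confirm the policy exists and selects every pod in the namespace (an empty podSelector matches all pods):

kubectl describe networkpolicy default-deny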

Read More: K8s Network Policy

Question 9

Context - You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.

Task - Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types: Deployment, StatefulSet, DaemonSet. Create a new ServiceAccount named cicd-token in the existing namespace app-team1. Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.

Answer to question 9

kubectl config use-context k8s

kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets

kubectl create serviceaccount cicd-token -n app-team1

kubectl create rolebinding deployment-rolebinding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
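
To verify the binding, impersonate the service account and check the permission (note the plural, lower-case resource name):

kubectl auth can-i create deployments \
    --as=system:serviceaccount:app-team1:cicd-token -n app-team1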

Question 10

Task - Given an existing Kubernetes cluster running version 1.22.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.22.2. Be sure to drain the master node before upgrading it and uncordon it after the upgrade.

You are also expected to upgrade kubelet and kubectl on the master node.

Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or other addons.

Answer to Question 10

kubectl config use-context k8s

k get nodes

k drain mk8s-master-0 --ignore-daemonsets

k get nodes

ssh mk8s-master-0

sudo -i

apt install -y kubeadm=1.22.2-00 kubelet=1.22.2-00 kubectl=1.22.2-00

kubeadm upgrade plan

kubeadm upgrade apply v1.22.2

systemctl restart kubelet

exit

kubectl uncordon mk8s-master-0

k get nodes
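
On clusters installed from the upstream Kubernetes apt repository, the kubeadm, kubelet and kubectl packages are usually version-held, so the install step above may need to be wrapped with apt-mark (an assumption about how the node was provisioned):

apt-mark unhold kubeadm kubelet kubectl

# run the apt install step shown above, then re-hold the packages

apt-mark hold kubeadm kubelet kubectl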

Question 11

Task - Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace fubar. Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 9000 of Pods in namespace fubar. Further ensure that the new NetworkPolicy:

  • does not allow access to Pods which don’t listen on port 9000
  • does not allow access from Pods which are not in namespace internal

Answer to Question 11

kubectl config use-context k8s

vi policy.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: fubar
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              project: myapp
      ports:
        - protocol: TCP
          port: 9000
kubectl label ns internal project=myapp

kubectl describe ns internal

kubectl create -f policy.yml
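
A rough way to test the policy (not required by the task) is to start a temporary pod in the internal namespace and try to reach a pod in fubar on port 9000; the target IP is a placeholder you would look up with kubectl get pods -n fubar -o wide:

kubectl run tmp --rm -it --image=busybox -n internal -- wget -qO- -T 2 <fubar-pod-ip>:9000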

Question 12

Task - Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx. Create a new service named front-end-svc exposing the container port http. Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

Answer to question 12

kubectl config use-context k8s

k get deploy

k edit deploy front-end

kubectl expose deploy front-end --name=front-end-svc --port=80 --type=NodePort --protocol=TCP

kubectl describe svc front-end-svc
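
In the k edit deploy front-end step, the nginx container needs a named port. The relevant part of the container spec would look roughly like this (only the ports block is new; the image line stays whatever the deployment already uses):

    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
              protocol: TCP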

Question 13

Scale the deployment presentation to 3 pods.

Answer to question 13

kubectl config use-context k8s

k scale deploy presentation --replicas=3

k get deploy

k get po

Question 14

Schedule a pod as follows:

  • Name: nginx-kusc00401
  • Image: nginx
  • Node selector: disk=ssd

Answer to question 14

kubectl config use-context k8s

kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yml

pod.yml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
    - image: nginx
      name: nginx-kusc00401
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeSelector:
    disk: ssd
kubectl apply -f pod.yml
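
Assuming at least one node carries the disk=ssd label, the pod should be scheduled onto it; if no node has the label the pod stays Pending:

kubectl get nodes -l disk=ssd

kubectl get pod nginx-kusc00401 -o wide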

Question 15

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.

Answer to question 15

kubectl get nodes

echo 2 > /opt/KUSC00402/kusc00402.txt

cat /opt/KUSC00402/kusc00402.txt
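
Rather than counting by eye, the two numbers can be pulled from the cluster directly; the 2 written above assumes the output showed two Ready nodes without a NoSchedule taint:

kubectl get nodes --no-headers | grep -w Ready | wc -l

kubectl describe nodes | grep -i taints | grep -c NoSchedule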

Question 16

Schedule a Pod as follows:

  • Name: kucc8
  • App Containers: 2
  • Container Name/Images:
    • nginx
    • consul

Answer to question 16

kubectl run kucc8 --image=nginx --dry-run=client -o yaml > app2.yml

vi app2.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kucc8
  name: kucc8
spec:
  containers:
    - image: nginx
      name: nginx
    - image: consul
      name: consul

Apply the manifest, then watch until both containers are running

kubectl apply -f app2.yml

k get po

Question 17

Create a persistent volume with name app-data, of capacity 2Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /srv/app-data.

Answer to question 17

kubectl config use-context k8s

vi pv1.yml

pv1.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: /srv/app-data
k apply -f pv1.yml

k get pv

Question 18

Monitor the logs of pod foo and:

  • Extract log lines corresponding to error file-not-found
  • Write them to /opt/KUTR00101/foo

Answer to question 18

kubectl config use-context k8s

kubectl get pods

kubectl logs foo | grep file-not-found > /opt/KUTR00101/foo

Question 19

Context - An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Task - Add a sidecar container named sidecar, using the busybox image, to the existing Pod big-corp-app. The new sidecar container has to run the following command:

/bin/sh -c "tail -n+1 -f /var/log/big-corp-app.log"

Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.

Don’t modify the specification of the existing container other than adding the required volume mount.

Answer to question 19

k get po big-corp-app -o yaml > big-corp-app.yml

vim big-corp-app.yml

big-corp-app.yml after adding the sidecar

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: big-corp-app
  name: big-corp-app
spec:
  containers:
    - image: busybox
      name: sidecar
      command: ["/bin/sh"]
      args: ["-c", "tail -n+1 -f /var/log/big-corp-app.log"]
      volumeMounts:
        - mountPath: /var/log
          name: logs
    - image: lfcert/monitor:latest
      name: monitor
      imagePullPolicy: Always
      env:
        - name: LOG_FILENAME
          value: /var/log/big-corp-app.log
      volumeMounts:
        - mountPath: /var/log
          name: logs
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    # shared volume so both containers see /var/log/big-corp-app.log
    - name: logs
      emptyDir: {}

Then recreate the pod with the updated manifest (containers cannot be added to a running Pod in place):

kubectl replace --force -f big-corp-app.yml

kubectl get pods
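
Since the whole point is integration with kubectl logs, the sidecar can be checked directly:

kubectl logs big-corp-app -c sidecar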

Question 20

From the pods with label name=overloaded-cpu, find the pod running the highest CPU workload and write the name of that pod to the file /opt/KUTR00401/KUTR00401.txt (which already exists).

Answer to question 20

kubectl config use-context k8s

k top po -l name=overloaded-cpu --sort-by=cpu

echo "<pod name>" > /opt/KUTR00401/KUTR00401.txt

cat /opt/KUTR00401/KUTR00401.txt

Question 21

A Kubernetes worker node, named wk8s-node-0 is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

Answer to question 21

kubectl config use-context k8s

k get no

k describe no wk8s-node-0

ssh wk8s-node-0

sudo -i

systemctl enable --now kubelet

systemctl restart kubelet

systemctl status kubelet

exit

k get no
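
If enabling and restarting kubelet does not bring the node back, the kubelet journal on the node usually shows the real cause (wrong certificate paths, swap enabled, container runtime down, and so on):

journalctl -u kubelet --no-pager | tail -n 50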