The Certified Kubernetes Administrator (CKA) certification validates that you have the skills and knowledge to perform the responsibilities of a Kubernetes administrator: cluster setup, workload deployment, networking, storage, troubleshooting, and security. The Certified Kubernetes Application Developer (CKAD) certification focuses on designing, configuring, and exposing cloud-native applications on Kubernetes and using core primitives to build, monitor, and troubleshoot applications.
This post is a practice set of sample CKA-style questions with step-by-step answers. Use it to rehearse RBAC (ServiceAccounts, ClusterRoles, ClusterRoleBindings), deployments and rollouts, etcd backup and restore, PersistentVolumes, taints and tolerations, node draining, NetworkPolicies, security contexts, and similar exam topics. Always run commands against a practice cluster (e.g. Setup Kubernetes Cluster on Ubuntu with kubeadm) and confirm the official CKA curriculum for current exam scope.
Prerequisites
- A Kubernetes cluster and `kubectl` configured. Set the context used in the tasks (e.g. `k8s`):
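For example, assuming the practice context is named `k8s`:

```shell
kubectl config use-context k8s
```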
Question 1
Create a new service account with the name pvviewer. Grant this Service account access to list all PersistentVolumes in the cluster by creating an appropriate cluster role called pvviewer-role and ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image: redis and serviceaccount: pvviewer in the default namespace.
Answer to question 1
Create Service account
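One command satisfies this step (the namespace defaults to `default`):

```shell
kubectl create serviceaccount pvviewer
```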
Create cluster role
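A ClusterRole that allows only listing PersistentVolumes:

```shell
kubectl create clusterrole pvviewer-role \
  --verb=list --resource=persistentvolumes
```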
Create cluster role binding
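Bind the ClusterRole to the ServiceAccount in the default namespace:

```shell
kubectl create clusterrolebinding pvviewer-role-binding \
  --clusterrole=pvviewer-role \
  --serviceaccount=default:pvviewer
```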
Verify
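`kubectl auth can-i` can impersonate the ServiceAccount to confirm the binding works:

```shell
kubectl auth can-i list persistentvolumes \
  --as=system:serviceaccount:default:pvviewer
# expected output: yes
```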
Generate yaml for the pod
Generate the manifest with a client-side dry run, add `serviceAccountName: pvviewer` under `spec`, then apply:
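A sketch of that flow:

```shell
kubectl run pvviewer --image=redis --dry-run=client -o yaml > pvviewer.yaml
# edit pvviewer.yaml and add under spec:
#   serviceAccountName: pvviewer
kubectl apply -f pvviewer.yaml
```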
Question 2
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Record the version. Next upgrade the deployment to version 1.17 using rolling update. Make sure that the version upgrade is recorded in the resource annotation.
Answer to question 2
To generate the manifest:
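A client-side dry run writes the Deployment manifest without creating it:

```shell
kubectl create deployment nginx-deploy --image=nginx:1.16 \
  --replicas=1 --dry-run=client -o yaml > nginx-deploy.yaml
```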
This is the generated yml file
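Trimmed to the relevant fields, the generated manifest looks roughly like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
```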
Apply
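Apply with `--record` so the change-cause annotation is recorded, then roll out the new image (the flag is deprecated in newer kubectl versions but still works for this task):

```shell
kubectl apply -f nginx-deploy.yaml --record
kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record
kubectl rollout history deployment nginx-deploy
```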
Question 3
Create a snapshot of the etcd instance running at https://127.0.0.1:2379. Save the snapshot to /opt/etcd-snapshot.db.
Use the etcd CA certificate, server certificate, and server key for the snapshot (on kubeadm clusters these typically live under /etc/kubernetes/pki/etcd/),
and then restore from the previous ETCD backup.
Answer to question 3
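A snapshot command using the standard kubeadm certificate paths (confirm against the paths given in the question):

```shell
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```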
Verify
Note: Do not perform this step in the exam, as it may cause an issue in the restoration process.
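`snapshot status` prints the hash, revision, and key count of the backup:

```shell
ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-snapshot.db --write-out=table
```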
Restore
You do not need to memorize every flag of the restore command; restoring into a fresh data directory is enough:
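A minimal restore, followed by pointing the etcd static pod at the new data directory (paths assume a kubeadm cluster):

```shell
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-from-backup
# then edit /etc/kubernetes/manifests/etcd.yaml and change the etcd-data
# hostPath volume from /var/lib/etcd to /var/lib/etcd-from-backup;
# the kubelet recreates the etcd static pod automatically
```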
Question 4
Create a Persistent Volume with the given specification.
Volume Name: pv-analytics, Storage: 100Mi, Access modes: ReadWriteMany, Host Path: /pv/data-analytics
Answer to question 4
pv.yml
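A manifest matching the given specification:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /pv/data-analytics
```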
Then apply
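Apply and verify:

```shell
kubectl apply -f pv.yml
kubectl get pv pv-analytics
```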
Read More: Persistent Volumes in Kubernetes
Question 5
Taint the worker node to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image redis:alpine with toleration to be scheduled on node01.
key: env_type, value: production, operator: Equal, effect: NoSchedule
Answer to question 5
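Assuming the worker node is node01 (as the question implies), taint it and check that an untolerated pod does not land there:

```shell
kubectl taint nodes node01 env_type=production:NoSchedule
kubectl run dev-redis --image=redis:alpine
kubectl get pods -o wide   # dev-redis should schedule elsewhere or stay Pending
```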
prod-redis.yaml with the required toleration:
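A pod manifest carrying the toleration from the task:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  containers:
  - name: prod-redis
    image: redis:alpine
  tolerations:
  - key: env_type
    operator: Equal
    value: production
    effect: NoSchedule
```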
Apply and confirm the pod is scheduled on node01:
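```shell
kubectl apply -f prod-redis.yaml
kubectl get pods -o wide
```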
Read More: Scheduling in K8s
Question 6
Set the node named worker node as unavailable and reschedule all the pods running on it. (Drain node)
Answer for question 6
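`kubectl drain` cordons the node and evicts its pods in one step. The node name node01 is an assumption here; substitute the actual worker node:

```shell
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data
kubectl get nodes   # node01 shows SchedulingDisabled
```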
Question 7
Create a Pod called non-root-pod, image: redis:alpine
runAsUser: 1000
fsGroup: 2000
Answer to question 7
file non-root-pod.yaml
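The security context is set at the pod level so it applies to the container and its volumes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: non-root-pod
    image: redis:alpine
```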
Create
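```shell
kubectl apply -f non-root-pod.yaml
```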
Question 8
Create a NetworkPolicy which denies all ingress traffic
Answer to question 8
File policy.yaml:
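An empty `podSelector` matches every pod in the namespace, and listing `Ingress` with no rules denies all inbound traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```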
Apply
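```shell
kubectl apply -f policy.yaml
```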
Read More: K8s Network Policy
Question 9
Context -
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task -
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
✑ Deployment
✑ Stateful Set
✑ DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
Answer to question 9
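Because access must be limited to app-team1, the binding is a RoleBinding (not a ClusterRoleBinding) that references the ClusterRole:

```shell
kubectl create clusterrole deployment-clusterrole \
  --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create serviceaccount cicd-token -n app-team1
# the RoleBinding name is not specified in the task;
# deployment-rolebinding is a placeholder
kubectl create rolebinding deployment-rolebinding \
  --clusterrole=deployment-clusterrole \
  --serviceaccount=app-team1:cicd-token \
  -n app-team1
```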
Question 10
Task -
Given an existing Kubernetes cluster running version 1.22.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.22.2.
Be sure to drain the master node before upgrading it and uncordon it after the upgrade.
You are also expected to upgrade kubelet and kubectl on the master node.
Do not upgrade the worker nodes, etcd, the container runtime, the CNI plugin, the DNS service, or other add-ons.
Answer to Question 10
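An upgrade sequence for a kubeadm control plane on Ubuntu; the node name mk8s-master-0 and the apt package pin are assumptions, so adjust them to the exam environment:

```shell
kubectl drain mk8s-master-0 --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=1.22.2-00
kubeadm upgrade apply v1.22.2 --etcd-upgrade=false
apt-get install -y kubelet=1.22.2-00 kubectl=1.22.2-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon mk8s-master-0
```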
Question 11
Task -
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace fubar.
Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 9000 of Pods in namespace fubar.
Further ensure that the new NetworkPolicy:
✑ does not allow access to Pods, which don’t listen on port 9000
✑ does not allow access from Pods, which are not in namespace internal
Answer to Question 11
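A sketch that relies on the `kubernetes.io/metadata.name` label, which Kubernetes 1.22 sets on every namespace; if the exam cluster lacks it, label the internal namespace yourself and match on that label instead:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: fubar
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: internal
    ports:
    - protocol: TCP
      port: 9000
EOF
```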
Question 12
Task -
Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.
Answer to question 12
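Add the named port to the container, then expose it; `--target-port` accepts the port name:

```shell
kubectl edit deployment front-end
# inside the nginx container definition, add:
#   ports:
#   - name: http
#     containerPort: 80
#     protocol: TCP
kubectl expose deployment front-end --name=front-end-svc \
  --port=80 --target-port=http --type=NodePort
```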
Question 13
Scale the deployment presentation to 3 pods.
Answer to question 13
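A single command does it:

```shell
kubectl scale deployment presentation --replicas=3
kubectl get deployment presentation
```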
Question 14
Schedule a pod as follows:
- Name: nginx-kusc00401
- Image: nginx
- Node selector: disk=ssd
Answer to question 14
The pod manifest, generated with a dry run and extended with a nodeSelector:
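One way to do it:

```shell
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yaml
# add to pod.yaml under spec:
#   nodeSelector:
#     disk: ssd
kubectl apply -f pod.yaml
```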
Question 15
Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.
Answer to question 15
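Count the Ready nodes, subtract those carrying a NoSchedule taint, and write the result; the `2` below is a placeholder for whatever you count:

```shell
kubectl get nodes | grep -w Ready
kubectl describe nodes | grep -i taints | grep -i noschedule
echo 2 > /opt/KUSC00402/kusc00402.txt   # placeholder: use the number you counted
```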
Question 16
Schedule a Pod as follows:
- Name: kucc8
- App Containers: 2
- Container Name/Images:
  - nginx
  - consul
Answer to question 16
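A single pod with both containers; nginx and consul both run long-lived daemons, so the plain images suffice:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
  - name: consul
    image: consul
EOF
```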
Watch until both containers are running:
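```shell
kubectl get pod kucc8 --watch
```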
Question 17
Create a persistent volume with name app-data, of capacity 2Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /srv/app-data.
Answer to question 17
pv1.yml
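The manifest and the apply step:

```shell
cat <<'EOF' > pv1.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: /srv/app-data
EOF
kubectl apply -f pv1.yml
```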
Question 18
Monitor the logs of pod foo and:
- Extract log lines corresponding to error file-not-found
- Write them to /opt/KUTR00101/foo
Answer to question 18
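A grep over the pod logs does the extraction:

```shell
kubectl logs foo | grep file-not-found > /opt/KUTR00101/foo
cat /opt/KUTR00101/foo
```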
Question 19
Context -
An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Task -
Add a sidecar container named sidecar, using the busybox image, to the existing Pod big-corp-app. The new sidecar container has to run the following command:
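The exact command comes from the exam prompt; in the commonly circulated version of this task it is:

```shell
/bin/sh -c 'tail -n+1 -f /var/log/big-corp-app.log'
```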
Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.
Don’t modify the specification of the existing container other than adding the required volume mount.
Answer to question 19
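Export the current pod spec first:

```shell
kubectl get pod big-corp-app -o yaml > big-corp-app.yml
```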
big-corp-app.yml diff
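The additions to big-corp-app.yml, shown as a sketch; the volume name `logs` is an assumption:

```yaml
spec:
  containers:
  - name: big-corp-app
    # ...existing fields unchanged; only this volumeMounts entry is added
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: sidecar
    image: busybox
    command: ["/bin/sh", "-c", "tail -n+1 -f /var/log/big-corp-app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
```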
Containers cannot be added to a running pod in place, so delete and recreate it from the edited file:
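```shell
kubectl replace --force -f big-corp-app.yml
```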
Question 20
From the pods with the label name=overloaded-cpu, find the pod running the highest CPU workload and write its name to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
Answer to question 20
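`kubectl top` needs metrics-server; sort by CPU and take the first name:

```shell
kubectl top pods -l name=overloaded-cpu --sort-by=cpu
echo '<pod-name>' > /opt/KUTR00401/KUTR00401.txt   # replace with the top pod's name
```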
Question 21
A Kubernetes worker node, named wk8s-node-0 is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
Answer to question 21
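In this scenario the usual culprit is a stopped kubelet; enabling the service makes the fix survive a reboot:

```shell
ssh wk8s-node-0
sudo systemctl status kubelet
sudo systemctl start kubelet
sudo systemctl enable kubelet   # make the change permanent
exit
kubectl get nodes   # wk8s-node-0 should become Ready
```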
Summary
This practice set covered sample CKA-style tasks and answers:
- RBAC: ServiceAccounts, ClusterRoles, ClusterRoleBindings (Q1, Q9)
- Workloads: Pods, Deployments, rolling updates, scaling (Q2, Q7, Q12, Q13, Q14, Q16)
- Storage: PersistentVolumes, hostPath (Q4, Q17)
- Scheduling: Taints, tolerations, nodeSelector, drain, uncordon (Q5, Q6, Q10, Q15)
- etcd: Snapshot save and restore (Q3)
- Networking: NetworkPolicy for deny-all ingress and allow-from-namespace (Q8, Q11)
- Observability: Logs, sidecar, kubectl top (Q18, Q19, Q20)
- Troubleshooting: Node NotReady, kubelet (Q21)
Use a practice cluster and the official CKA curriculum to align with the current exam. For cluster setup, see Setup Kubernetes Cluster on Ubuntu with kubeadm.
Related Posts
- Setup Kubernetes Cluster on Ubuntu 20.04 with kubeadm – Build a cluster to practice on
- How to Setup Prometheus Monitoring on Kubernetes – Monitoring and observability
- Kubernetes TLS Security Hardening (Traefik & Nginx) – Security and TLS