CKA exam simulator solutions study notes
write a command that writes all context names into /opt/course/1/contexts - correct answer-k config get-contexts -o name > /opt/course/1/contexts
write a command to display the current context - correct answer-k config current-context
write a command that displays the current context without kubectl - correct answer-cat ~/.kube/config | grep current
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be
named pod1 and the container should be named pod1-container. This Pod should only be
scheduled on controlplane nodes. Do not add new labels to any nodes.
what steps do we do? - correct answer-1. dry-run the pod with the required name and image to generate a yaml file
2. in the yaml file, change the following:
- container name to pod1-container
3. add the following (see the sketch after these steps):
- a toleration with key node-role.kubernetes.io/control-plane and effect NoSchedule
- nodeSelector with the controlplane node label node-role.kubernetes.io/control-plane: ""
4. create the pod
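A minimal pod1 manifest sketch, assuming the controlplane node carries the common node-role.kubernetes.io/control-plane taint and label (on older clusters it may be node-role.kubernetes.io/master; verify with k describe node and k get node --show-labels):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: pod1-container                 # renamed from the generated default
    image: httpd:2.4.41-alpine
  tolerations:                           # tolerate the controlplane taint
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
  nodeSelector:                          # schedule only onto controlplane nodes
    node-role.kubernetes.io/control-plane: ""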
There are two Pods named o3db-* in Namespace project-c13. C13 management asked you
to scale the Pods down to one replica to save resources.
what steps to take?
assuming it's a statefulset, what command do we use? - correct answer-1. in that namespace, list the pods and use k describe on one of them to see which controller manages it (the Controlled By field).
2. once we know the owner, in this case a statefulset named o3db, use the below command (full flow sketched after these steps):
- k -n project-c13 scale statefulset o3db --replicas=1
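A short sketch of the full check-then-scale flow; the StatefulSet name o3db is inferred from the o3db-* Pod names, and o3db-0 is one of the two Pods:

k -n project-c13 get pod | grep o3db                       # find the Pods
k -n project-c13 describe pod o3db-0 | grep 'Controlled'   # Controlled By: StatefulSet/o3db
k -n project-c13 scale statefulset o3db --replicas=1
k -n project-c13 get statefulset o3db                      # verify READY 1/1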
Do the following in Namespace default. Create a single Pod named ready-if-service-ready of
image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command
true. Also configure a ReadinessProbe which does check if the url
http://service-am-i-ready:80 is reachable, you can use wget -T2 -O-
http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the
ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id:
cross-server-ready. The already existing Service service-am-i-ready should now have that
second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.
what steps to take? - correct answer-1. create a pod yaml via dry-run
- k -n default run ready-if-service-ready --image=nginx:1.16.1-alpine --dry-run=client -o yaml > ready.yaml
2. edit the yaml file and add the following (see the sketch after these steps):
- a livenessProbe that execs the command true
- a readinessProbe that execs sh -c 'wget -T2 -O- http://service-am-i-ready:80'
3. create the pod and confirm it is running but not ready
4. create the second pod
- k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
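A minimal probes sketch for the container section of ready.yaml, using the standard exec probe fields of the Pod spec:

spec:
  containers:
  - name: ready-if-service-ready
    image: nginx:1.16.1-alpine
    livenessProbe:                 # always succeeds
      exec:
        command:
        - 'true'
    readinessProbe:                # ready only once the Service answers
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'

With only the first Pod running, k get pod ready-if-service-ready shows READY 0/1; once the labelled am-i-ready Pod becomes an endpoint of service-am-i-ready it flips to 1/1.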
There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh
which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by
field metadata.uid. Use kubectl sorting for both commands. - correct answer-1. k get pods -A --sort-by=metadata.creationTimestamp
2. k get pods -A --sort-by=metadata.uid
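Since the task wants the commands stored in script files, a sketch of what each file could contain:

# /opt/course/5/find_pods.sh
kubectl get pods -A --sort-by=metadata.creationTimestamp

# /opt/course/5/find_pods_uid.sh
kubectl get pods -A --sort-by=metadata.uid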
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi,
accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It
should request 2Gi storage, accessMode ReadWriteOnce and should not define a
storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that
volume at /tmp/safari-data. The Pods of that Deployment should be of image
httpd:2.4.41-alpine.
steps?
note: when using a PVC as a volume, how do we list it as a volume in a pod? - correct answer-1. vi a yaml file and paste a PV template, add the following:
- capacity.storage: 2Gi
- accessModes: ReadWriteOnce
- hostPath.path: /Volumes/Data
2. vi a yaml file and paste a PVC template, add the following:
- namespace: project-tiger
- accessModes: ReadWriteOnce
- resources.requests.storage: 2Gi
3. generate and edit the deployment yaml:
- k -n project-tiger create deploy safari --image=httpd:2.4.41-alpine --dry-run=client -o yaml > deploy.yaml
- add a volumeMount at /tmp/safari-data to the container with that image
- add a volume of type persistentVolumeClaim with claimName: safari-pvc (this answers the note)
- all three manifests are sketched after these steps
4. create everything
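A sketch of the three pieces, using the standard PV/PVC/Deployment fields; the volume name data is an arbitrary choice:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:                              # no storageClassName defined
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /Volumes/Data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:                              # no storageClassName, so it binds to safari-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

And inside the Deployment's Pod template spec in deploy.yaml:

      containers:
      - image: httpd:2.4.41-alpine
        name: safari
        volumeMounts:              # mount the claim into the container
        - name: data
          mountPath: /tmp/safari-data
      volumes:                     # a persistentVolumeClaim volume, as per the note
      - name: data
        persistentVolumeClaim:
          claimName: safari-pvc

Verify with k get pv and k -n project-tiger get pvc (both should show STATUS Bound).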
The metrics-server has been installed in the cluster. Your colleague would like to know the
kubectl commands to:
show Nodes resource usage
show Pods and their containers resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
steps? - correct answer-1. k top nodes
2. k top pods --containers=true
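As in question 5, the commands should live in the script files; a sketch:

# /opt/course/7/node.sh
kubectl top node

# /opt/course/7/pod.sh
kubectl top pod --containers=true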