Question: 1
Refer to Exhibit:
Task:
A pod within the Deployment named buffalo-deployment and in namespace gorilla is logging errors.
1) Look at the logs to identify error messages.
Find errors, including: User "system:serviceaccount:gorilla:default" cannot list resource "deployment" [...] in the namespace "gorilla"
2) Update the Deployment buffalo-deployment to resolve the errors in the logs of the Pod.
The buffalo-deployment's manifest can be found at ~/prompt/escargot/buffalo-deployment.yaml
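The log message points to missing RBAC permissions for the pod's service account. A hedged sketch of one way to resolve it; the names buffalo-sa, deployment-reader, and deployment-reader-binding are assumptions for illustration, not given in the task:

kubectl create serviceaccount buffalo-sa -n gorilla
kubectl create role deployment-reader -n gorilla --verb=list --resource=deployments
kubectl create rolebinding deployment-reader-binding -n gorilla --role=deployment-reader --serviceaccount=gorilla:buffalo-sa
# Update the Deployment to use the new service account and let it roll out:
kubectl set serviceaccount deployment buffalo-deployment buffalo-sa -n gorilla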
Question: 2
Refer to Exhibit:
Given a container that writes a log file in format A and a container that converts log files from format A to format B, create a deployment that runs both containers such that the log files from the first container are converted by the second container, emitting logs in format B.
Task:
* Create a deployment named deployment-xyz in the default namespace, that:
* Includes a primary lfccncf/busybox:1 container, named logger-dev
* Includes a sidecar lfccncf/fluentd:v0.12 container, named adapter-zen
* Mounts a shared volume /tmp/log on both containers, which does not persist when the pod is deleted
* Instructs the logger-dev container to run the command shown in the exhibit, which should output logs to /tmp/log/input.log in plain text format, with the example values given in the exhibit
* The adapter-zen sidecar container should read /tmp/log/input.log and output the data to /tmp/log/output.* in Fluentd JSON format. Note that no knowledge of Fluentd is required to complete this task: all you will need to achieve this is to create the ConfigMap from the spec file provided at /opt/KDMC00102/fluentd-configmap.yaml, and mount that ConfigMap to /fluentd/etc in the adapter-zen sidecar container (a manifest sketch follows below)
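A minimal sketch of a matching Deployment manifest, assuming the ConfigMap defined in /opt/KDMC00102/fluentd-configmap.yaml is named fluentd-config, and using a placeholder logger command (the real command is given in the exhibit):

kubectl create -f /opt/KDMC00102/fluentd-configmap.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-xyz
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-xyz
  template:
    metadata:
      labels:
        app: deployment-xyz
    spec:
      volumes:
      - name: log-volume
        emptyDir: {}             # shared volume; removed when the pod is deleted
      - name: fluentd-config
        configMap:
          name: fluentd-config   # assumed name; check the provided spec file
      containers:
      - name: logger-dev
        image: lfccncf/busybox:1
        # Placeholder command; substitute the one given in the exhibit.
        command: ["/bin/sh", "-c", "while true; do date >> /tmp/log/input.log; sleep 1; done"]
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
      - name: adapter-zen
        image: lfccncf/fluentd:v0.12
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
        - name: fluentd-config
          mountPath: /fluentd/etc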
Question: 3
Refer to Exhibit:
Context
A project that you are working on has a requirement for persistent data to be available.
Task
To facilitate this, perform the following tasks:
* Create a file on node sk8s-node-0 at /opt/KDSP00101/data/index.html with the content Acct=Finance
* Create a PersistentVolume named task-pv-volume using hostPath and allocate 1Gi to it, specifying that the volume is at /opt/KDSP00101/data on the cluster's node. The configuration should specify the access mode of ReadWriteOnce. It should define the StorageClass name exam for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
* Create a PersistentVolumeClaim named task-pv-claim that requests a volume of at least 100Mi and specifies an access mode of ReadWriteOnce.
* Create a pod that uses the PersistentVolumeClaim as a volume, with a label app: my-storage-app, mounting the resulting volume to a mountPath /usr/share/nginx/html inside the pod (a manifest sketch follows below).
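A sketch of the manifests, assuming an nginx image for the pod (implied by the mount path) and a hypothetical pod name task-pv-pod; the index.html is created on the node first (e.g. after ssh sk8s-node-0):

# On node sk8s-node-0:
mkdir -p /opt/KDSP00101/data
echo 'Acct=Finance' > /opt/KDSP00101/data/index.html

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: exam
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /opt/KDSP00101/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: exam
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod      # hypothetical name; the task does not specify one
  labels:
    app: my-storage-app
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: web            # nginx assumed from the mount path
    image: nginx
    volumeMounts:
    - name: task-pv-storage
      mountPath: /usr/share/nginx/html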
Question: 4
Refer to Exhibit:
Context
A user has reported an application is unreachable due to a failing livenessProbe.
Task
Perform the following tasks:
* Find the broken pod and store its name and namespace to /opt/KDOB00401/broken.txt in the format shown in the exhibit. The output file has already been created.
* Store the associated error events to a file /opt/KDOB00401/error.txt. The output file has already been created. You will need to use the -o wide output specifier with your command.
* Fix the issue.
Explanation:
Solution:
Create the Pod:
kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec
The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image 'gcr.io/google_containers/busybox'
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image 'gcr.io/google_containers/busybox'
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec
At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image 'gcr.io/google_containers/busybox'
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image 'gcr.io/google_containers/busybox'
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Wait another 30 seconds, and verify that the Container has been restarted:
kubectl get pod liveness-exec
The output shows that RESTARTS has been incremented:
NAME READY STATUS RESTARTS AGE
liveness-exec   1/1       Running   1          1m
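For the exam task itself, a hedged sketch of the commands; <pod-name> and <namespace> are placeholders to be read off the kubectl output, and /tmp/broken-pod.yaml is a scratch file chosen for illustration:

kubectl get pods -A                        # look for CrashLoopBackOff or a climbing RESTARTS count
echo "<pod-name> <namespace>" > /opt/KDOB00401/broken.txt      # use the format from the exhibit
kubectl get events -n <namespace> -o wide | grep <pod-name> > /opt/KDOB00401/error.txt
# livenessProbe fields cannot be edited on a running pod, so save, fix, and recreate it:
kubectl get pod <pod-name> -n <namespace> -o yaml > /tmp/broken-pod.yaml
kubectl delete pod <pod-name> -n <namespace>
kubectl apply -f /tmp/broken-pod.yaml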
Question: 5
Refer to Exhibit:
Task
You have rolled out a new pod to your infrastructure and now you need to allow it to communicate with the web and storage pods but nothing else. Given the running pod kdsn00201-newpod, edit it to use a network policy that will allow it to send and receive traffic only to and from the web and storage pods.
Explanation:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
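This policy only selects pods labeled name: internal, so the running pod must carry that label for the policy to apply; a hedged follow-up, assuming the pod runs in the default namespace:

kubectl label pod kdsn00201-newpod name=internal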