Question: 1
Context
A default-deny NetworkPolicy prevents accidentally exposing a Pod in a namespace that has no other NetworkPolicy defined.
Task
Create a new default-deny NetworkPolicy named defaultdeny in the namespace testing for all traffic of type Egress.
The new NetworkPolicy must deny all Egress traffic in the namespace testing.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace testing.
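A minimal manifest sketch for this task, following the default-deny egress example from the Kubernetes NetworkPolicy documentation:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: defaultdeny
  namespace: testing
spec:
  # An empty podSelector selects every Pod in the namespace.
  podSelector: {}
  # Egress is listed but no egress rules are defined, so all egress is denied.
  policyTypes:
  - Egress

Apply it with kubectl apply -f <file> and confirm with kubectl describe networkpolicy defaultdeny -n testing.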
Question: 2
Enable audit logs in the cluster. To do so, enable the log backend and ensure that:
1. Logs are stored at /var/log/kubernetes/kubernetes-logs.txt.
2. Log files are retained for 5 days.
3. At most 10 old audit log files are retained.
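A hedged sketch of the matching kube-apiserver flags in /etc/kubernetes/manifests/kube-apiserver.yaml (the policy file location /etc/kubernetes/audit-policy.yaml is an assumption; any readable path works):

- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/kubernetes-logs.txt
- --audit-log-maxage=5
- --audit-log-maxbackup=10

If the API server runs as a static Pod, the log directory and the policy file also need matching hostPath volumes and volumeMounts.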
Edit and extend the basic policy to log:
1. CronJob changes at the RequestResponse level.
2. The request body of Deployment changes in the namespace kube-system.
3. All other resources in core and extensions at the Request level.
4. Don't log watch requests by "system:kube-proxy" on endpoints or services.
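A sketch of an extended policy under these assumptions: CronJobs live in the batch API group and Deployments in apps, and rule order matters because the first matching rule applies:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Don't log watch requests by system:kube-proxy on endpoints or services.
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
  resources:
  - group: "" # core API group
    resources: ["endpoints", "services"]
# Log CronJob changes at the RequestResponse level.
- level: RequestResponse
  resources:
  - group: "batch"
    resources: ["cronjobs"]
# Log the request body of Deployment changes in kube-system.
- level: Request
  resources:
  - group: "apps"
    resources: ["deployments"]
  namespaces: ["kube-system"]
# Log all other core and extensions resources at the Request level.
- level: Request
  resources:
  - group: ""
  - group: "extensions"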
Question: 3
You must complete this task on the following cluster/nodes:
Cluster: trace
Master node: master
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context trace
Given: You may use Sysdig or Falco documentation.
Task:
Use detection tools to detect anomalies, such as processes frequently spawning and executing unusual commands, in the single container belonging to the Pod tomcat.
Two tools are available to use:
1. falco
2. sysdig
Tools are pre-installed on the worker1 node only.
Analyse the container's behaviour for at least 40 seconds, using filters that detect newly spawning and executing processes.
Store an incident file at /home/cert_masters/report, in the following format:
[timestamp],[uid],[processName]
Note: Make sure to store the incident file on the cluster's worker node; don't move it to the master node.
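A hedged sketch using sysdig on worker1 (the container name tomcat is an assumption; confirm it first with a runtime listing such as crictl ps or docker ps):

[worker1@cli] $ sudo sysdig -M 40 -p "%evt.time,%user.uid,%proc.name" container.name=tomcat and evt.type=execve > /home/cert_masters/report

Here -M 40 stops the capture after 40 seconds, -p formats each event as timestamp, user id, and process name, and the filter keeps only execve events (newly spawned processes) from the target container.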
Question: 4
SIMULATION
Create a new ServiceAccount named backend-sa in the existing namespace default, with the capability to list the Pods inside the namespace default.
Create a new Pod named backend-pod in the namespace default, mount the newly created ServiceAccount backend-sa to the Pod, and verify that the Pod is able to list Pods.
Ensure that the Pod is running.
Explanation:
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside Pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
When you create a Pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw JSON or YAML for a Pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.
You can access the API from inside a Pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service account depend on the authorization plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
The Pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
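A hedged sketch of the task itself, assuming RBAC is enabled (the Role name pod-reader and the nginx image are illustrative choices, not required by the task):

kubectl create serviceaccount backend-sa -n default
kubectl create role pod-reader --verb=list --resource=pods -n default
kubectl create rolebinding backend-sa-pod-reader --role=pod-reader --serviceaccount=default:backend-sa -n default

apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: default
spec:
  serviceAccountName: backend-sa
  containers:
  - name: backend
    image: nginx

Verify the permission with kubectl auth can-i list pods -n default --as=system:serviceaccount:default:backend-sa (expected output: yes), and confirm the Pod is Running with kubectl get pod backend-pod -n default.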
Question: 5
SIMULATION
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:
a. Ensure the --authorization-mode argument includes RBAC
b. Ensure the --authorization-mode argument includes Node
c. Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the Kubelet:
a. Ensure the --anonymous-auth argument is set to false.
b. Ensure that the --authorization-mode argument is set to Webhook.
Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true
Hint: Make use of the tool kube-bench.
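As a hedged sketch, kube-bench can reproduce these findings (assuming it is installed on both nodes; target flags vary between versions):

# On the master node, check the control-plane components (API server, etcd):
kube-bench run --targets master
# On the worker node, check the kubelet:
kube-bench run --targets node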
Explanation:
API server:
Ensure the --authorization-mode argument includes RBAC
Turn on Role Based Access Control.
Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
+   - kube-apiserver
+   - --authorization-mode=RBAC,Node
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver-should-pass
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
Ensure the --authorization-mode argument includes Node
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'Node,RBAC' has 'Node'
Ensure that the --profiling argument is set to false
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the below parameter.
--profiling=false
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'false' is equal to 'false'
Fix all of the following violations that were found against the Kubelet:
Ensure the --anonymous-auth argument is set to false.
Remediation: If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Audit:
/bin/ps -fC kubelet
Audit Config:
/bin/cat /var/lib/kubelet/config.yaml
Expected result:
'false' is equal to 'false'
Ensure that the --authorization-mode argument is set to Webhook.
Audit
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned value: --authorization-mode=Webhook
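A hedged sketch of the matching settings in the kubelet config file, assuming the kubelet reads its configuration from /var/lib/kubelet/config.yaml as in the Audit Config step above:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
authorization:
  mode: Webhook

After editing, restart the kubelet with systemctl daemon-reload && systemctl restart kubelet.service.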
Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true
Do not use self-signed certificates for TLS. etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable client authentication via valid certificates to secure access to the etcd service.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
+   - etcd
+   - --auto-tls=false
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
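To verify the change after the etcd static Pod restarts, the same audit pattern used above applies:

Audit:
/bin/ps -ef | grep etcd | grep -v grep

Expected result:
The --auto-tls argument is absent or set to 'false'.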