Mock Exam 1

  • Question 1

  • Create a Pod mc-pod in the mc-namespace namespace with three containers. The first container should be named mc-pod-1, run the nginx:1-alpine image, and set an environment variable NODE_NAME to the node name. The second container should be named mc-pod-2, run the busybox:1 image, and continuously log the output of the date command to the file /var/log/shared/date.log every second. The third container should have the name mc-pod-3, run the image busybox:1, and print the contents of the date.log file generated by the second container to stdout. Use a shared, non-persistent volume.

  • The containers need access to the same filesystem, so we need to set up a shared volume.

You can check the Pod page in the Documentation, which gives an example configuration:

https://kubernetes.io/docs/concepts/workloads/pods/

Copy this example yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

Can also do kubectl run mc-pod --image=nginx:1-alpine --dry-run=client -o yaml > question1.yaml

Opening question1.yaml then gives a base configuration like the one above.

After further configuration, question1.yaml looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: mc-pod
  namespace: mc-namespace
spec:
  volumes:
    - name: shared-volume
      emptyDir: {}
  containers:
  - name: mc-pod-1
    image: nginx:1-alpine
    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
  - name: mc-pod-2
    image: busybox:1
    command:
      - "sh"
      - "-c"
      - "while true; do date >> /var/log/shared/date.log; sleep 1; done"
    volumeMounts:
      - name: shared-volume
        mountPath: /var/log/shared
  - name: mc-pod-3
    image: busybox:1
    command:
      - "sh"
      - "-c"
      - "tail -f /var/log/shared/date.log"
    volumeMounts:
      - name: shared-volume
        mountPath: /var/log/shared
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
  • Then we apply the above with kubectl apply -f question1.yaml

  • Need to say which pod to grab the logs for:

kubectl logs mc-pod -c mc-pod-3 -n mc-namespace -f
  • This streams the output of mc-pod-3, which tails the date.log written by mc-pod-2.

  • Question 2 - install a container runtime.

  • As an administrator, you need to prepare node01 to install kubernetes. One of the steps is installing a container runtime. Install the cri-docker_0.3.16.3-0.debian.deb package located in /root and ensure that the cri-docker service is running and enabled to start on boot.

  • This question needs to be solved on node node01. To access the node using SSH, use the credentials below:

username: bob password: caleston123

  • Install the cri-docker package with dpkg -i /root/cri-docker_0.3.16.3-0.debian.deb

  • Then we start the systemd service with:

systemctl start cri-docker
  • Enable it so it starts on boot:
systemctl enable cri-docker
  • To verify it is enabled, run the following:
systemctl is-enabled cri-docker
  • Question 3

  • On controlplane node, identify all CRDs related to VerticalPodAutoscaler and save their names into the file /root/vpa-crds.txt.

  • To see all CRDs on the system, run kubectl get crd

  • Then filter for VerticalPodAutoscaler and save the names into the file /root/vpa-crds.txt (mind the filename), for example: kubectl get crd | grep -i verticalpodautoscaler > /root/vpa-crds.txt

  • Question 4

  • Create a service messaging-service to expose the messaging application within the cluster on port 6379.

  • Run kubectl get pods.

  • Then kubectl expose pod <pod_name> --port=6379 --name=messaging-service

  • ClusterIP is the default service type (it only allows communication within the cluster), so there is no need to specify it in the above command.

  • To check the IP of the service, run kubectl get service

  • Then inspect the service in detail with kubectl describe service messaging-service.

Use imperative commands.
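For reference, the imperative command above generates a Service roughly like the following sketch (the selector label here is hypothetical; check the real pod labels with kubectl get pod --show-labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: messaging-service
spec:
  type: ClusterIP      # default type, shown here for clarity
  selector:
    tier: msg          # hypothetical label; use the pod's actual labels
  ports:
  - port: 6379         # port the service exposes
    targetPort: 6379   # port the container listens on
```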

  • Question 5

  • Create a deployment named hr-web-app using the image kodekloud/webapp-colour with 2 replicas.

  • The Deployment documentation has an example manifest, but it is quicker to do this imperatively:

kubectl create deployment hr-web-app --image=kodekloud/webapp-colour --replicas=2
  • Check with kubectl get deployment
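Equivalently, the same deployment can be written declaratively; a minimal sketch (the container name is chosen arbitrarily):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hr-web-app
  labels:
    app: hr-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hr-web-app
  template:
    metadata:
      labels:
        app: hr-web-app
    spec:
      containers:
      - name: webapp-colour            # arbitrary container name
        image: kodekloud/webapp-colour
```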

  • Question 6

  • A new application orange is deployed. There is something wrong with it. Identify and fix the issue.

  • Have a quick check with kubectl get pod

  • We can see the pod has an init container and that it is crashing.

  • Then run kubectl describe pod; the pod is stuck in Pending state.

    • Under Init Containers, go down to State and you can see that the State is in Terminated, with a Reason of Error and Exit Code of 127

    • For the container to run, we need the init container to complete successfully.

    • Need to figure out why the init container is not running successfully.

    • From the kubectl describe pod output, we can see the command contains sleeeeep instead of sleep. Confirm in the logs with:

    kubectl logs <pod_name> -c <container_name>  
    
  • Before running the fix, make sure the pod is not part of a replicaset or deployment.

    • kubectl get rs

    • kubectl get deploy

  • Then we pipe the yaml output of the pod to a file:

kubectl get pod <pod_name> -o yaml > <file.yaml>
  • Open up the file.yaml and fix the command line typo.

  • Then run a kubectl replace -f <file.yaml> --force

    • This deletes the old pod and recreates it from the corrected file.
  • Question 7

  • Expose the hr-web-app created in the previous task as a service named hr-web-app-service, accessible on port 30082 on the nodes of the cluster.

  • The web application listens on port 8080.

  • We need a NodePort service.

  • Now we expose a deployment:

kubectl expose deployment hr-web-app --type=NodePort --port=8080 --name=hr-web-app-service --dry-run=client -o yaml > <file.yaml>
  • Edit the <file.yaml>

  • Then add the nodePort field so the file looks like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: hr-web-app
  name: hr-web-app-service
spec:
  ports: 
  - port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30082 
  selector:
    app: hr-web-app
  type: NodePort
status:
  loadBalancer: {}
  • Then run kubectl apply -f <file.yaml>

  • Check with kubectl describe service hr-web-app-service, under Endpoints you will see two IP entries, as we have two pods in this case.

  • Question 8

  • Create a Persistent Volume with the given specification: -

Volume name: pv-analytics

Storage: 100Mi

Access mode: ReadWriteMany

Host path: /pv/data-analytics

  • If you don’t remember how to create a persistent volume off the top of your head, use the Kubernetes Documentation during the exam.

https://kubernetes.io/docs/concepts/storage/persistent-volumes/

  • Grab the example config:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
  • We change the file to the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath: 
    path: /pv/data-analytics
  • Then apply the file:
kubectl apply -f <file.yaml>
  • Check the PV was created:
kubectl get pv
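To actually consume this PV, a pod would claim it through a PersistentVolumeClaim; a sketch with a hypothetical claim name (not part of the question), requesting a matching access mode and size so it can bind:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-analytics      # hypothetical name for illustration
spec:
  accessModes:
    - ReadWriteMany        # must be compatible with the PV's access mode
  resources:
    requests:
      storage: 100Mi       # fits within the PV's 100Mi capacity
```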
  • Question 9

  • Create a Horizontal Pod Autoscaler (HPA) with name webapp-hpa for the deployment named kkapp-deploy in the default namespace with the webapp-hpa.yaml file located under the root folder. Ensure that the HPA scales the deployment based on CPU utilisation, maintaining an average CPU usage of 50% across all pods. Configure the HPA to cautiously scale down pods by setting a stabilisation window of 300 seconds to prevent rapid fluctuations in pod count.

Note: The kkapp-deploy deployment has already been created; you can check it in the terminal.

  • Example horizontal autoscaler yaml:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kkapp-deploy
  minReplicas: 2
  maxReplicas: 10
  • Two useful pieces of documentation for this:

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

  • We make changes to the above file:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kkapp-deploy
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300 
  • Then we apply the file:
kubectl apply -f <file.yaml>
  • Check the Horizontal Pod Autoscalers with this command:
kubectl get hpa
  • Question 10

  • Deploy a Vertical Pod Autoscaler (VPA) with name analytics-vpa for the deployment named analytics-deployment in the default namespace.
  • The VPA should automatically adjust the CPU and memory requests of the pods to optimise resource utilisation. Ensure that the VPA operates in Auto mode, allowing it to evict and recreate pods with updated resource requests as needed.

  • Create the yaml file:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: analytics-vpa
  namespace: default
spec:
  targetRef: 
    apiVersion: apps/v1
    kind: Deployment
    name: analytics-deployment
  updatePolicy: 
    updateMode: "Auto"
  • Then run kubectl apply -f <file.yaml>

  • We can check the Vertical Pod Autoscaler with the following:

kubectl get vpa
  • Question 11

  • Create a Kubernetes Gateway resource with the following specifications:

    Name: web-gateway
    Namespace: nginx-gateway
    Gateway Class Name: nginx
    Listeners:
      Protocol: HTTP
      Port: 80
      Name: http

  • Use this documentation:

https://kubernetes.io/docs/concepts/services-networking/gateway/

  • We use this configuration from the above documentation:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
  • Then we create a new file with the above config and update the values as the following:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: nginx-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
  • kubectl apply -f <file.yaml>

  • Then we can find it by one of the following commands:

kubectl get gateway
kubectl get gateway -n nginx-gateway
  • Question 12

  • One co-worker deployed an nginx helm chart kk-mock1 in the kk-ns namespace on the cluster. A new update is pushed to the helm chart, and the team wants you to update the helm repository to fetch the new changes.

After updating the helm chart, upgrade the helm chart version to 18.1.15.

  • Quickly check the helm charts with the following:
helm list
  • Can also find helm charts that are in certain namespaces:
helm list -n kk-ns
  • Need to update the repositories to grab the latest version:
helm repo update
  • Search the repositories:
helm search repo nginx
  • Can find available versions with:
helm search repo nginx --versions
  • Shows each of the versions for each of the charts.

  • Then do an upgrade:

helm upgrade kk-mock1 kk-mock1/nginx --version 18.1.15 -n kk-ns
  • Running helm list again then should show the version has been upgraded.
