

[k8s] CKA Practice Problems and Solutions

Lightning Lab & MOCK EXAM

Lightning Lab

1. Upgrade the current version of kubernetes from 1.18 to 1.19.0 exactly using the kubeadm utility. Make sure that the upgrade is carried out one node at a time starting with the master node. To minimize downtime, the deployment gold-nginx should be rescheduled on an alternate node before upgrading each node.

Upgrade master/controlplane node first. Drain node01 before upgrading it. Pods for gold-nginx should run on the master/controlplane node subsequently.

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

Follow along carefully with the linked guide above.
Order: master node first, then the worker node.
If you need to get onto the worker node, ssh node01.
Run the drain for the worker node from the master node.
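
A condensed sketch of the whole sequence, assuming apt-based nodes and the lab's node names controlplane/node01 (use master if that is your node's name):

kubectl drain controlplane --ignore-daemonsets   # gold-nginx gets rescheduled to node01
apt-get update && apt-get install -y kubeadm=1.19.0-00
kubeadm upgrade plan
kubeadm upgrade apply v1.19.0
apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
systemctl restart kubelet
kubectl uncordon controlplane

kubectl drain node01 --ignore-daemonsets         # gold-nginx moves back to the controlplane
ssh node01
apt-get update && apt-get install -y kubeadm=1.19.0-00
kubeadm upgrade node
apt-get install -y kubelet=1.19.0-00
systemctl restart kubelet
logout
kubectl uncordon node01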

2. Print the names of all deployments in the admin2406 namespace in the following format:

DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE

<deployment name> <container image used> <ready replica count> <Namespace>

The data should be sorted by the increasing order of the deployment name.

Example:
DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
deploy0 nginx:alpine 1 admin2406
Write the result to the file /opt/admin2406_data.

Hint: Make use of -o custom-columns and --sort-by to print the data in the required format.

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

kubectl get deployments.apps -n admin2406 -o custom-columns='DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:.spec.template.spec.containers[*].image,READY_REPLICAS:.status.readyReplicas,NAMESPACE:.metadata.namespace' --sort-by=.metadata.name > /opt/admin2406_data

Note: .status.readyReplicas is the ready count the question asks for; .spec.replicas would only give the desired count.

3. A kubeconfig file called admin.kubeconfig has been created in /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.

kubectl cluster-info --kubeconfig=/root/CKA/admin.kubeconfig
vi /root/CKA/admin.kubeconfig

The port at the end of the server URL is wrong; fix it (the API server listens on 6443 by default).
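
For reference, the field to fix is the server URL under the clusters section; with placeholder values it looks roughly like:

clusters:
- cluster:
    certificate-authority: <unchanged>
    server: https://controlplane:6443   # the port here was wrong
  name: kubernetes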

4. Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next upgrade the deployment to version 1.17 using rolling update. Make sure that the version upgrade is recorded in the resource annotation.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

vi nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16

Update:

kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record

Verify:

kubectl rollout history deployment nginx-deploy
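
--record stores the exact command in the deployment's kubernetes.io/change-cause annotation, so it can also be checked directly:

kubectl describe deployment nginx-deploy | grep -i change-cause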

5. A new deployment called alpha-mysql has been deployed in the alpha namespace. However, the pods are not running. Troubleshoot and fix the issue. The deployment should make use of the persistent volume alpha-pv to be mounted at /var/lib/mysql and should use the environment variable MYSQL_ALLOW_EMPTY_PASSWORD=1 to make use of an empty root password.

Important: Do not alter the persistent volume.

https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/

kubectl get deploy alpha-mysql -n alpha
kubectl describe deploy alpha-mysql -n alpha
kubectl get deploy alpha-mysql -n alpha -o yaml > deploy.yaml
vi deploy.yaml

Change the Persistent Volume Claim name to alpha-claim.

kubectl get pvc -n alpha
kubectl get pv

Create a StorageClass (StorageClass objects are cluster-scoped):

vi storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
kubectl apply -f storage.yaml
kubectl get pvc alpha-claim -n alpha -o yaml > pvc.yaml
kubectl delete pvc alpha-claim -n alpha
vi pvc.yaml

Edit the claim:

ReadWriteMany --> ReadWriteOnce
storage-slow --> slow
request 2Gi --> 1Gi
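
After those edits the recreated claim should look roughly like this (values mirror the list above and what alpha-pv offers):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alpha-claim
  namespace: alpha
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 1Gi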

kubectl apply -f pvc.yaml

6. Take the backup of ETCD at the location /opt/etcd-backup.db on the master node

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-backup.db
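
To sanity-check the backup afterwards:

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd-backup.db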

7. Create a pod called secret-1401 in the admin1401 namespace using the busybox image. The container within the pod should be called secret-admin and should sleep for 4800 seconds.

The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume. The secret being mounted has already been created for you and is called dotfile-secret.

https://kubernetes.io/docs/concepts/configuration/secret/

kubectl run secret-1401 --image=busybox --namespace=admin1401 --dry-run=client -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-1401
  labels:
    run: secret-1401
spec:
  containers:
  - name: secret-admin
    image: busybox
    command: ["sleep", "4800"]
    volumeMounts:
    - name: secret-volume
      mountPath: "/etc/secret-volume"
  volumes:
  - name: secret-volume
    secret:
      secretName: dotfile-secret
kubectl apply -f pod.yaml

MOCK EXAM 1

1. Deploy a pod named nginx-pod using the nginx:alpine image.

kubectl run nginx-pod --image=nginx:alpine

2. Deploy a messaging pod using the redis:alpine image with the labels set to tier=msg.

kubectl run messaging --image=redis:alpine --labels=tier=msg

3. Create a namespace named apx-x9984574

kubectl create ns apx-x9984574

4. Get the list of nodes in JSON format and store it in a file at /opt/outputs/nodes-z3444kd9.json

kubectl get nodes -o json > /opt/outputs/nodes-z3444kd9.json

5. Create a service messaging-service to expose the messaging application within the cluster on port 6379.

kubectl expose pod messaging --name=messaging-service --port=6379

6. Create a deployment named hr-web-app using the image kodekloud/webapp-color with 2 replicas

kubectl create deployment hr-web-app --image=kodekloud/webapp-color --replicas=2

7. Create a static pod named static-busybox on the master node that uses the busybox image and the command sleep 1000

kubectl run static-busybox --image=busybox --dry-run=client -o yaml > static-busybox.yaml
vi static-busybox.yaml

Add command: ["sleep", "1000"] to the container:

apiVersion: v1
kind: Pod
metadata:
  name: static-busybox
spec:
  containers:
  - name: static-busybox
    image: busybox
    command: ["sleep", "1000"]
mv static-busybox.yaml /etc/kubernetes/manifests

8. Create a POD in the finance namespace named temp-bus with the image redis:alpine.

kubectl run temp-bus --image=redis:alpine --namespace=finance

9. A new application orange is deployed. There is something wrong with it. Identify and fix the issue.

kubectl get pod orange -o yaml > orange.yaml
kubectl delete pod orange
vi orange.yaml

Fix the typo: sleeeep --> sleep.
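
In this lab the typo sits in the pod's init container command; after the fix the relevant fragment of orange.yaml looks roughly like this (the init container name is from the lab and may differ):

initContainers:
- name: init-myservice
  image: busybox
  command: ["sh", "-c", "sleep 2;"]   # was "sleeeep 2;"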

kubectl apply -f orange.yaml

10. Expose the hr-web-app as service hr-web-app-service application on port 30082 on the nodes on the cluster

The web application listens on port 8080

kubectl expose deployment hr-web-app --name=hr-web-app-service --port=8080 --dry-run=client -o yaml > hr-web-app-service.yaml
vi hr-web-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hr-web-app-service
  labels:
    app: hr-web-app
spec:
  type: NodePort
  selector:
    app: hr-web-app
  ports:
    - port: 8080
      nodePort: 30082
kubectl apply -f hr-web-app-service.yaml

11. Use JSON PATH query to retrieve the osImages of all the nodes and store it in a file /opt/outputs/nodes_os_x43kj56.txt

The osImages are under the nodeInfo section under status of each node.

https://kubernetes.io/docs/reference/kubectl/jsonpath/

kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.osImage}' > /opt/outputs/nodes_os_x43kj56.txt

12. Create a Persistent Volume with the given specification

Volume Name: pv-analytics, Storage: 100Mi, Access modes: ReadWriteMany, Host Path: /pv/data-analytics

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

vi pv-analytics.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/pv/data-analytics"
kubectl apply -f pv-analytics.yaml

MOCK EXAM 2

1. Take a backup of the etcd cluster and save it to /opt/etcd-backup.db

https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#built-in-snapshot

cd /etc/kubernetes/manifests
cat etcd.yaml

Check the paths of ca.crt, server.crt, and server.key, then run:

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-backup.db

2. Create a Pod called redis-storage with image: redis:alpine with a Volume of type emptyDir that lasts for the life of the Pod. Specs on the right.

https://kubernetes.io/docs/concepts/storage/volumes/#emptydir-configuration-example

kubectl run redis-storage --image=redis:alpine --dry-run=client -o yaml > pod.yaml
vi pod.yaml
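
The edit adds an emptyDir volume and its mount. The volume name and mount path below are assumptions based on the linked docs example; the real values come from the specs shown in the exam panel:

apiVersion: v1
kind: Pod
metadata:
  name: redis-storage
spec:
  containers:
  - name: redis-storage
    image: redis:alpine
    volumeMounts:
    - mountPath: /data/redis   # mount path assumed from the docs example
      name: data
  volumes:
  - name: data                 # volume name assumed
    emptyDir: {}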
kubectl apply -f pod.yaml

3. Create a new pod called super-user-pod with image busybox:1.28. Allow the pod to be able to set system_time. sleep 4800.

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

kubectl run super-user-pod --image=busybox:1.28 --dry-run=client -o yaml > super.yaml
vi super.yaml
apiVersion: v1
kind: Pod
metadata:
  name: super-user-pod
spec:
  containers:
  - name: super-user-pod
    image: busybox:1.28
    command: ["sleep", "4800"]
    securityContext:
      capabilities:
        add: ["SYS_TIME"]
kubectl apply -f super.yaml

4. A pod definition file is created at /root/CKA/use-pv.yaml. Make use of this manifest file and mount the persistent volume called pv-1. Ensure the pod is running and the PV is bound. mountPath: /data persistentVolumeClaim Name: my-pvc

https://kubernetes.io/ko/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
kubectl apply -f pvc.yaml
vi use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  name: use-pv
spec:
  volumes:
    - name: my-pvc
      persistentVolumeClaim:
        claimName: my-pvc
  containers:
    - name: use-pv
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: my-pvc
kubectl apply -f use-pv.yaml

5. Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Record the version. Next upgrade the deployment to version 1.17 using rolling update. Make sure that the version upgrade is recorded in the resource annotation.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

vi nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16

Update:

kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record

Verify:

kubectl rollout history deployment nginx-deploy

6. Create a new user called john. Grant him access to the cluster. John should have permission to create, list, get, update and delete pods in the development namespace. The private key exists in the location: /root/CKA/john.key and csr at /root/CKA/john.csr

Important Note: As of kubernetes 1.19, the CertificateSigningRequest object expects a signerName.

Please refer documentation below to see the example usage:

https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificate-request-kubernetes-object

https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/

cat <<EOF > john.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john-developer
spec:
  request: $(cat /root/CKA/john.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client   # a client cert for john needs this signer, not kubelet-serving
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
kubectl apply -f john.yaml

Confirm the CSR was created:

kubectl get csr

Approve the Pending CSR:

kubectl certificate approve john-developer
kubectl create role developer --resource=pods --verb=create,list,get,update,delete --namespace=development 
kubectl create rolebinding developer-role-binding --role=developer --user=john --namespace=development

Verify:

kubectl -n development describe rolebindings.rbac.authorization.k8s.io developer-role-binding
kubectl auth can-i update pods --namespace=development --as=john


7. Create an nginx pod called nginx-resolver using image nginx, expose it internally with a service called nginx-resolver-service. Test that you are able to look up the service and pod names from within the cluster. Use the image: busybox:1.28 for dns lookup. Record results in /root/CKA/nginx.svc and /root/CKA/nginx.pod

https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names

kubectl run nginx-resolver --image=nginx
kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP

Test:

kubectl run test-nslookup --image=busybox:1.28 --rm -it -- nslookup nginx-resolver-service > /root/CKA/nginx.svc

Get the pod IP (a pod's DNS name is its IP with the dots replaced by dashes):

kubectl get pod nginx-resolver -o wide
kubectl run test-nslookup --image=busybox:1.28 --rm -it -- nslookup 10-32-0-5.default.pod > /root/CKA/nginx.pod

8. Create a static pod on node01 called nginx-critical with image nginx. Create this pod on node01 and make sure that it is recreated/restarted automatically in case of a failure.

Use /etc/kubernetes/manifests as the Static Pod path for example.

kubectl get nodes
ssh node01

Check the static pod path in the kubelet config:

systemctl status kubelet
cat /var/lib/kubelet/config.yaml | grep staticPodPath
cd /etc/kubernetes
mkdir manifests
logout
kubectl run nginx-critical --image=nginx --dry-run=client -o yaml > nginx-critical.yaml
cat nginx-critical.yaml

Copy the contents, then:

ssh node01
cd /etc/kubernetes/manifests
vi nginx-critical.yaml

Paste the copied contents in.
No kubectl apply needed; the kubelet picks up static pods from the manifests directory automatically.
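
For reference, the pasted manifest is essentially the trimmed dry-run output:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-critical
spec:
  containers:
  - name: nginx-critical
    image: nginx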

Verify:

logout
kubectl get pods

MOCK EXAM 3

1. Create a new service account with the name pvviewer. Grant this Service account access to list all PersistentVolumes in the cluster by creating an appropriate cluster role called pvviewer-role and ClusterRoleBinding called pvviewer-role-binding.

Next, create a pod called pvviewer with the image: redis and serviceAccount: pvviewer in the default namespace

https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

kubectl create serviceaccount pvviewer
kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
kubectl run pvviewer --image=redis --dry-run=client -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
  serviceAccountName: pvviewer
kubectl apply -f pod.yaml
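
A quick way to confirm the RBAC wiring, impersonating the service account:

kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer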

2. List the InternalIP of all nodes of the cluster. Save the result to a file /root/CKA/node_ips

Answer should be in the format: InternalIP of master<space>InternalIP of node1<space>InternalIP of node2<space>InternalIP of node3 (in a single line)

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

# Get ExternalIPs of all nodes
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

Change ExternalIP to InternalIP and save to the required file:

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips

3. Create a pod called multi-pod with two containers.

Container 1, name: alpha, image: nginx

Container 2: beta, image: busybox, command sleep 4800.

Environment Variables:

container 1:
name: alpha

Container 2:
name: beta

https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/

kubectl run alpha --image=nginx --dry-run=client -o yaml > multi-pod.yaml
vi multi-pod.yaml

Change metadata.name to multi-pod.
Edit the containers section as below (the assembled manifest follows the fragment):

containers:
  - image: nginx
    name: alpha
    env:
    - name: name
      value: alpha
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: name
      value: beta
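
Assembled into the complete manifest that gets applied:

apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    env:
    - name: name
      value: alpha
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: name
      value: beta
kubectl apply -f multi-pod.yaml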

4. Create a Pod called non-root-pod , image: redis:alpine runAsUser: 1000 fsGroup: 2000

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

kubectl run non-root-pod --image=redis:alpine --dry-run=client -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: non-root-pod
    image: redis:alpine
    securityContext:
      allowPrivilegeEscalation: false
kubectl apply -f pod.yaml

5. We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it. Create NetworkPolicy, by the name ingress-to-nptest that allows incoming connections to the service over port 80

https://kubernetes.io/docs/concepts/services-networking/network-policies/

kubectl describe svc np-test-service
kubectl get netpol
kubectl describe netpol default-deny

The existing default-deny policy simply blocks all ingress traffic.

Create a new NetworkPolicy that allows ingress to the pod on port 80:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  ingress:
  - ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress

To test the connection:

kubectl run test-np --image=busybox:1.28 --rm -it -- sh
nc -z -v -w 2 np-test-service 80 

nc: netcat
-z: scan mode; just check the port and close the connection immediately
-v: verbose
-w secs: time out after secs seconds

6. Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image redis:alpine with toleration to be scheduled on node01.

https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

key: env_type, value: production, operator: Equal, effect: NoSchedule

kubectl taint node node01 env_type=production:NoSchedule
kubectl run dev-redis --image=redis:alpine

Confirm the pod was not scheduled on node01:

kubectl get pods -o wide

Create the pod with the toleration:

kubectl run prod-redis --image=redis:alpine --dry-run=client -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  containers:
  - name: prod-redis
    image: redis:alpine
  tolerations:
  - key: "env_type"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"

Create and verify:

kubectl apply -f pod.yaml
kubectl get pods -o wide

7. Create a pod called hr-pod in hr namespace belonging to the production environment and frontend tier.

image: redis:alpine

Use appropriate labels and create all the required objects if it does not exist in the system already.

kubectl create ns hr
kubectl run hr-pod  --image=redis:alpine --labels=environment=production,tier=frontend --namespace=hr

Verify:

kubectl -n hr get pods --show-labels

8. A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.

kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
vi /root/CKA/super.kubeconfig

The port is wrong; change 9999 to 6443.

9. We have created a new deployment called nginx-deploy. scale the deployment to 3 replicas. Has the replica's increased? Troubleshoot the issue and fix it.

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

kubectl scale deployment nginx-deploy --replicas=3
kubectl describe deployments.apps nginx-deploy
kubectl -n kube-system get pods

kube-controller-manager-master is the broken pod.

cd /etc/kubernetes/manifests
vi kube-controller-manager.yaml

The manifest says kube-contro1ler-manager (with the digit 1) instead of kube-controller-manager; fix it:

sed -i 's/kube-contro1ler-manager/kube-controller-manager/g' kube-controller-manager.yaml
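
The kubelet restarts the static pod on its own once the manifest is saved; then confirm:

kubectl -n kube-system get pods        # kube-controller-manager should come back Running
kubectl get deployment nginx-deploy    # READY should reach 3/3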

Source

CKA with practice test


The result?


Failed the first attempt.

Passed on the second!

