[kubespray] 1.22 -> 1.23 Upgrade: Calico Error

Symptom

- The following error occurs when upgrading from 1.22 to 1.23 with kubespray

TASK [network_plugin/calico : Check if inventory match current cluster configuration] ****************************************************************
fatal: [k8s-master01]: FAILED! => {
    "assertion": "not calico_pool_conf.spec.ipipMode is defined or calico_pool_conf.spec.ipipMode == calico_ipip_mode",
    "changed": false,
    "evaluated_to": false,
    "msg": "Your inventory doesn't match the current cluster configuration"
}

 

 

Cause

- The Calico default encapsulation mode changed from ipip to vxlan, so the inventory defaults no longer match the configuration of the existing cluster.

 

 

 

Solution

(1) To keep using IP-in-IP mode

calico_ipip_mode: 'Always'  # Possible values are `Always`, `CrossSubnet`, `Never`
calico_vxlan_mode: 'Never'
calico_network_backend: 'bird'
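
These variables belong in the kubespray inventory; assuming the standard sample-inventory layout, the Calico settings usually live in a group_vars file like the one sketched below (the exact path depends on your inventory copy):

# inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml  (assumed path)
calico_ipip_mode: 'Always'
calico_vxlan_mode: 'Never'
calico_network_backend: 'bird'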

 

(2) BGP mode

- No encapsulation is used in BGP mode

calico_ipip_mode: 'Never'
calico_vxlan_mode: 'Never'
calico_network_backend: 'bird'

 

(3) Switching from ipip mode to vxlan mode

# All control-plane Nodes
cd /usr/local/bin

calicoctl.sh patch felixconfig default -p '{"spec":{"vxlanEnabled":true}}'
calicoctl.sh patch ippool default-pool -p '{"spec":{"ipipMode":"Never", "vxlanMode":"Always"}}'
calicoctl.sh patch felixconfig default -p '{"spec":{"ipipEnabled":false}}'

# upgrade
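
After the pools are patched, re-run the kubespray upgrade; a typical invocation (the inventory path and flags are assumptions, adjust them to your environment):

ansible-playbook -i inventory/mycluster/hosts.yaml upgrade-cluster.yml -b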

 

 

 

How to check the Calico mode

# IPIP mode
root@k8s-master01:~# calicoctl get ippool -o wide
NAME           CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR
default-pool   10.233.64.0/18   true   Always     Never       false      all()


# VXLAN mode
root@sung-ubuntu01:~# calicoctl get ippool -o wide
NAME           CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR
default-pool   10.233.64.0/18   true   Never      Always      false      all()

 

[LoadBalancer] MetalLB

MetalLB is an open-source project designed so that LoadBalancer-type Services can be used even on bare-metal Kubernetes clusters, without dedicated hardware. On public clouds the provider's load balancer can expose Services externally; MetalLB removes that limitation for on-premises environments.

 

What is a LoadBalancer?

A device (or program) that distributes the traffic destined for a particular server (or process) across multiple servers; depending on the solution, various distribution algorithms are offered.

 

The MetalLB project

A project recognized in 2021 under CNCF's Orchestration & Management / Service Proxy category. Its architecture is not complex, it is easy to use, and it implements exactly the functionality on-premises Kubernetes operators want.

 

Requirements

  • A Kubernetes cluster running version 1.13.0 or later
  • Fully usable with the Antrea, Canal, Cilium, Flannel, and Kube-ovn CNIs; with Calico, Kube-router, and Weave Net there are some stability issues
  • IPv4 addresses that can be assigned to MetalLB
  • When using BGP mode, one or more BGP-capable routers
  • When using L2 mode, TCP & UDP port 7946 must be open between nodes, per HashiCorp's memberlist* requirements (see the firewall sketch below)

*https://github.com/hashicorp/memberlist
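
If a host firewall is running on the nodes, port 7946 can be opened with, for example, firewalld (a sketch; the commands assume firewalld is in use):

firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --reload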

 

Features

Address allocation

In a cloud provider's Kubernetes cluster, requesting a load balancer makes the cloud platform assign an IP address. MetalLB cannot create IP addresses out of thin air, so the user must provide a pool of IP addresses it is allowed to use.

 

External announcement

After MetalLB assigns an external IP address to a Service, it has to make the network outside the cluster aware that the IP exists inside the cluster. Depending on the mode in use (ARP, NDP, or BGP), MetalLB announces the IP to the external network using standard networking or routing protocols.

  • Layer 2 Mode (ARP/NDP)
    • In Layer 2 (data link) mode, one machine in the cluster takes ownership of the Service and uses the standard address-resolution protocol (ARP for IPv4, NDP for IPv6) to make the IP reachable on the local network.
  • BGP
    • In BGP mode, every node in the cluster establishes a peering session with a nearby router that you control and tells that router how to forward traffic to the Service IPs.
    • BGP enables load balancing across multiple nodes and, thanks to BGP's policy mechanisms, fine-grained traffic control.

 

 

Installation and usage

Prerequisites

If kube-proxy is running in IPVS mode, enable strictARP.

root@k8s-master01:~# kubectl -n kube-system edit cm kube-proxy
# set strictARP: true
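
A non-interactive alternative (a sketch of the pipe suggested in the MetalLB docs):

kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system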

 

Installation

root@k8s-master01:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created

 

Verify the installation

root@k8s-master01:~# kubectl -n metallb-system get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-6c58495cbb-2l6dd   1/1     Running   0          13m
pod/speaker-gr4l9                 1/1     Running   0          13m
pod/speaker-mfqkn                 1/1     Running   0          13m
pod/speaker-nlddv                 1/1     Running   0          13m
pod/speaker-p69vf                 1/1     Running   0          13m
pod/speaker-sdk67                 1/1     Running   0          13m
pod/speaker-zwnht                 1/1     Running   0          13m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.233.13.253   <none>        443/TCP   13m

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   6         6         6       6            6           kubernetes.io/os=linux   13m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           13m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-6c58495cbb   1         1         1       13m

 

Create an IPAddressPool

root@k8s-master01:~# vi metallb_ip_pool.yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.110.118-192.168.110.119

root@k8s-master01:~# kubectl apply -f metallb_ip_pool.yaml
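
With MetalLB v0.13+ in Layer 2 mode the pool also has to be announced via an L2Advertisement resource; a minimal sketch referencing the pool above (the name l2-adv is arbitrary):

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool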

 

Create a Deployment & Service

root@k8s-master01:~# vi nginx.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8088
      targetPort: 80
root@k8s-master01:~# kubectl apply -f nginx.yaml

 

Check the EXTERNAL-IP and call the Service

root@k8s-master01:~# kubectl get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE
nginx-service   LoadBalancer   10.233.24.150   192.168.110.118   8088:30950/TCP   13m


root@k8s-master01:~# curl 192.168.110.118:8088
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

 

 

Notes

In earlier versions the IP addresses were configured with a ConfigMap; this has changed to configuration through the IPAddressPool Custom Resource (CR).

 

 

참고)

https://metallb.universe.tf/installation/clouds/

[K8s] Grafana Alert List

[Critical] Master Node
Master Kubelet Ready
Metrics browser > sum(up{job="kubelet",metrics_path="/metrics/probes",node=~"master0.p"})
when min() of query(A,5m,now) is below 3

MasterNode Ready
sum(kube_node_status_condition{condition="Ready",node=~"master01p|master02p|master03p",status="true"})
when min() of query(A,5m,now) is below 3

MasterNode Unreachable
sum(kube_node_status_condition{job="kube-state-metrics",status="true",condition="NetworkUnavailable",node=~"lgestgbee0.p"})
when max() of query(A,5m,now) is above 0



[Critical] Kubernetes System Pod
kube-scheduler
sum(kube_pod_status_ready{condition="true",pod=~"kube-scheduler-master0.*"})
when min() of query(A,5m,now) is below 3

kube-apiserver
sum(kube_pod_status_ready{namespace="kube-system",pod=~"kube-apiserver-master0.*",condition="true"})
when min() of query(A,5m,now) is below 3

kube-controller-manager
sum(kube_pod_status_ready{namespace="kube-system",pod=~"kube-controller-manager-master0.*",condition="true"})
when min() of query(A,5m,now) is below 3



[Critical] etcd
ETCD Ready
sum(kube_pod_status_ready{namespace="kube-system",pod=~"etcd-master0.*",condition="true"})
when min() of query(A,5m,now) is below 3

ETCD No Leader
etcd_server_has_leader{job="kube-etcd"}
when min() of query(A,5m,now) is below 1



[Critical] Services
PersistentVolume Error Count
sum(kube_persistentvolume_status_phase{phase=~"Pending|Failed"})
when max() of query(A,5m,now) is above 0

Trident CSI Ready Count
sum(kube_pod_status_ready{namespace="trident",pod=~"trident-csi.*",condition="true"})
when min() of query(A,5m,now) is below 43   * node count

Ingress Controller Ready
kube_deployment_status_replicas_ready{deployment="ingress-nginx-controller"}
when min() of query(A,5m,now) is below 1



[Warning] kubernetes
WorkerNode Ready
sum(kube_node_status_condition{condition="Ready",node=~"worker0.*|bworker.*",status="true"})
when min() of query(A,5m,now) is below 43   * node count

kube-proxy Ready
avg(kube_pod_status_ready{namespace="kube-system",pod=~"kube-proxy-.*",condition="true"})
when min() of query(A,5m,now) is below 1

Kube Memory Pressure
kube_node_status_condition{condition="MemoryPressure",status="true"}
when avg() of query(A,5m,now) is above 0

Kube CPU Overcommit
kube_node_status_allocatable{resource="cpu"}
when max() of query(A,5m,now) is above 64

KubeClientCertificateExpiration
kubelet_certificate_manager_client_expiration_renew_errors{job="kubelet",metrics_path="/metrics"}
when avg() of query(A,5m,now) is above 0

 

 

 

 

 

 

[rook-ceph] Storage Deployment

Goal

- Deploy rook-ceph to provide file and block storage on Kubernetes

 

 

1. Environment

(1) Server information

192.168.110.90 sung-deploy # deployment server
192.168.110.111 k8s-master01
192.168.110.112 k8s-master02
192.168.110.113 k8s-master03
192.168.110.114 k8s-worker01
192.168.110.115 k8s-worker02
192.168.110.116 k8s-worker03

 

(2) Disks for rook-ceph

k8s-worker01~03
/dev/vdb - additional 20GB disk

* Deployment was too slow on 4-core/8GB nodes, so it was run with 8 cores/16GB

 

 

 

2. Deployment preparation

(1) Configure the Helm repository

root@k8s-master01:~# helm repo add rook-release https://charts.rook.io/release

root@k8s-master01:~# helm repo list
NAME            URL
rook-release    https://charts.rook.io/release

 

(2) Download the configuration files for the Rook Ceph deployment

git clone https://github.com/rook/rook.git
cd rook
git checkout release-1.9

 

(3) Edit the chart values

root@k8s-master01:~/rook/deploy/charts/rook-ceph# vi values.yaml
image:
  repository: rook/ceph
  tag: v1.9.8
  pullPolicy: IfNotPresent
  

root@k8s-master01:~/rook/deploy/charts/rook-ceph# cd ../rook-ceph-cluster/
root@k8s-master01:~/rook/deploy/charts/rook-ceph-cluster# vi values.yaml
toolbox:
  enabled: true   # changed
  image: rook/ceph:v1.9.8
  tolerations: []
  affinity: {}
  resources:
    limits:
      cpu: "500m"
      memory: "1Gi"
    requests:
      cpu: "100m"
      memory: "128Mi"

  storage: # cluster level storage configuration and selection
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^vd[b-z]"

 

(4) Deploy

- Install the rook-ceph operator chart

root@k8s-master01:~/rook/deploy/charts/rook-ceph#  helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml
NAME: rook-ceph
LAST DEPLOYED: Mon Oct 17 23:01:35 2022
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Rook Operator has been installed. Check its status by running:
  kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"

Visit https://rook.io/docs/rook/latest for instructions on how to create and configure Rook clusters

Important Notes:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
- Each CephCluster must be deployed to its own namespace, the samples use `rook-ceph` for the namespace.
- The sample manifests assume you also installed the rook-ceph operator in the `rook-ceph` namespace.
- The helm chart includes all the RBAC required to create a CephCluster CRD in the same namespace.
- Any disk devices you add to the cluster in the 'CephCluster' must be empty (no filesystem and no partitions).







- Install the rook-ceph-cluster chart

root@k8s-master01:~/rook/deploy/charts/rook-ceph-cluster# helm install --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
NAME: rook-ceph-cluster
LAST DEPLOYED: Mon Oct 17 23:12:50 2022
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
  kubectl --namespace rook-ceph get cephcluster

Visit https://rook.io/docs/rook/latest/CRDs/ceph-cluster-crd/ for more information about the Ceph CRD.

Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`

 

 

(5) Verify the Rook Ceph deployment

root@k8s-master01:~# kubectl -n rook-ceph get all
NAME                                                         READY   STATUS      RESTARTS   AGE
pod/csi-cephfsplugin-5cq5z                                   3/3     Running     0          9m5s
pod/csi-cephfsplugin-8vwsk                                   3/3     Running     0          9m5s
pod/csi-cephfsplugin-provisioner-6444b9b9db-dtsk5            6/6     Running     0          9m2s
pod/csi-cephfsplugin-provisioner-6444b9b9db-jl72p            6/6     Running     0          9m4s
pod/csi-cephfsplugin-xs26z                                   3/3     Running     0          9m5s
pod/csi-rbdplugin-6rwjd                                      3/3     Running     0          9m5s
pod/csi-rbdplugin-fb5cx                                      3/3     Running     0          9m5s
pod/csi-rbdplugin-provisioner-69b4cfddfb-25ljb               6/6     Running     0          9m5s
pod/csi-rbdplugin-provisioner-69b4cfddfb-z28gn               6/6     Running     0          9m5s
pod/csi-rbdplugin-x7vn4                                      3/3     Running     0          9m5s
pod/rook-ceph-crashcollector-k8s-worker01-69d48b85b7-j5gf2   1/1     Running     0          98s
pod/rook-ceph-crashcollector-k8s-worker02-6d74467c48-8tlcn   1/1     Running     0          2m8s
pod/rook-ceph-crashcollector-k8s-worker03-5bb9b85fcf-4zlvb   1/1     Running     0          6m20s
pod/rook-ceph-mds-ceph-filesystem-a-8587497f74-pz5g8         1/1     Running     0          2m8s
pod/rook-ceph-mds-ceph-filesystem-b-5b848db44b-hwfwc         1/1     Running     0          99s
pod/rook-ceph-mgr-a-7c5946494c-5bpgx                         2/2     Running     0          6m20s
pod/rook-ceph-mgr-b-7758f6dc8f-m879n                         2/2     Running     0          6m19s
pod/rook-ceph-mon-a-5556c65f6c-w42fx                         1/1     Running     0          8m48s
pod/rook-ceph-mon-b-6d5c6854c6-769k6                         1/1     Running     0          7m10s
pod/rook-ceph-mon-c-76b9849478-8vcdz                         1/1     Running     0          6m50s
pod/rook-ceph-operator-6f66df9cdc-p78kx                      1/1     Running     0          11m
pod/rook-ceph-osd-0-576f684f6f-wcqft                         1/1     Running     0          3m59s
pod/rook-ceph-osd-1-6b5656b998-m86d6                         1/1     Running     0          4m4s
pod/rook-ceph-osd-2-c74677fbf-n4m9p                          1/1     Running     0          3m58s
pod/rook-ceph-osd-prepare-k8s-worker01-lp7fq                 0/1     Completed   0          17s
pod/rook-ceph-osd-prepare-k8s-worker02-xsw4r                 1/1     Running     0          10s
pod/rook-ceph-osd-prepare-k8s-worker03-nnccs                 0/1     Init:0/1    0          5s
pod/rook-ceph-tools-66df78cf69-w6q94                         1/1     Running     0          9m18s

NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/csi-cephfsplugin-metrics         ClusterIP   10.233.44.102   <none>        8080/TCP,8081/TCP   9m
service/csi-rbdplugin-metrics            ClusterIP   10.233.8.158    <none>        8080/TCP,8081/TCP   9m5s
service/rook-ceph-mgr                    ClusterIP   10.233.46.239   <none>        9283/TCP            5m29s
service/rook-ceph-mgr-dashboard          ClusterIP   10.233.2.10     <none>        8443/TCP            5m29s
service/rook-ceph-mon-a                  ClusterIP   10.233.28.124   <none>        6789/TCP,3300/TCP   8m49s
service/rook-ceph-mon-b                  ClusterIP   10.233.30.201   <none>        6789/TCP,3300/TCP   7m11s
service/rook-ceph-mon-c                  ClusterIP   10.233.3.43     <none>        6789/TCP,3300/TCP   6m51s
service/rook-ceph-rgw-ceph-objectstore   ClusterIP   10.233.22.181   <none>        80/TCP              2m51s

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/csi-cephfsplugin   3         3         3       3            3           <none>          9m5s
daemonset.apps/csi-rbdplugin      3         3         3       3            3           <none>          9m5s

NAME                                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/csi-cephfsplugin-provisioner            2/2     2            2           9m5s
deployment.apps/csi-rbdplugin-provisioner               2/2     2            2           9m5s
deployment.apps/rook-ceph-crashcollector-k8s-worker01   1/1     1            1           6m20s
deployment.apps/rook-ceph-crashcollector-k8s-worker02   1/1     1            1           6m19s
deployment.apps/rook-ceph-crashcollector-k8s-worker03   1/1     1            1           6m20s
deployment.apps/rook-ceph-mds-ceph-filesystem-a         1/1     1            1           2m8s
deployment.apps/rook-ceph-mds-ceph-filesystem-b         1/1     1            1           99s
deployment.apps/rook-ceph-mgr-a                         1/1     1            1           6m20s
deployment.apps/rook-ceph-mgr-b                         1/1     1            1           6m19s
deployment.apps/rook-ceph-mon-a                         1/1     1            1           8m48s
deployment.apps/rook-ceph-mon-b                         1/1     1            1           7m10s
deployment.apps/rook-ceph-mon-c                         1/1     1            1           6m50s
deployment.apps/rook-ceph-operator                      1/1     1            1           11m
deployment.apps/rook-ceph-osd-0                         1/1     1            1           3m59s
deployment.apps/rook-ceph-osd-1                         1/1     1            1           4m4s
deployment.apps/rook-ceph-osd-2                         1/1     1            1           3m58s
deployment.apps/rook-ceph-tools                         1/1     1            1           9m18s

NAME                                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/csi-cephfsplugin-provisioner-6444b9b9db            2         2         2       9m5s
replicaset.apps/csi-rbdplugin-provisioner-69b4cfddfb               2         2         2       9m5s
replicaset.apps/rook-ceph-crashcollector-k8s-worker01-69d48b85b7   1         1         1       6m20s
replicaset.apps/rook-ceph-crashcollector-k8s-worker01-fcdb54d45    0         0         0       4m4s
replicaset.apps/rook-ceph-crashcollector-k8s-worker02-6c5fbbbd78   0         0         0       3m58s
replicaset.apps/rook-ceph-crashcollector-k8s-worker02-6d74467c48   1         1         1       6m19s
replicaset.apps/rook-ceph-crashcollector-k8s-worker03-5bb9b85fcf   1         1         1       6m20s
replicaset.apps/rook-ceph-mds-ceph-filesystem-a-8587497f74         1         1         1       2m8s
replicaset.apps/rook-ceph-mds-ceph-filesystem-b-5b848db44b         1         1         1       99s
replicaset.apps/rook-ceph-mgr-a-7c5946494c                         1         1         1       6m20s
replicaset.apps/rook-ceph-mgr-b-7758f6dc8f                         1         1         1       6m19s
replicaset.apps/rook-ceph-mon-a-5556c65f6c                         1         1         1       8m48s
replicaset.apps/rook-ceph-mon-b-6d5c6854c6                         1         1         1       7m10s
replicaset.apps/rook-ceph-mon-c-76b9849478                         1         1         1       6m50s
replicaset.apps/rook-ceph-operator-6f66df9cdc                      1         1         1       11m
replicaset.apps/rook-ceph-osd-0-576f684f6f                         1         1         1       3m59s
replicaset.apps/rook-ceph-osd-1-6b5656b998                         1         1         1       4m4s
replicaset.apps/rook-ceph-osd-2-c74677fbf                          1         1         1       3m58s
replicaset.apps/rook-ceph-tools-66df78cf69                         1         1         1       9m18s

NAME                                           COMPLETIONS   DURATION   AGE
job.batch/rook-ceph-osd-prepare-k8s-worker01   1/1           14s        17s
job.batch/rook-ceph-osd-prepare-k8s-worker02   0/1           10s        10s
job.batch/rook-ceph-osd-prepare-k8s-worker03   0/1           5s         6s

 

(6) Check the Ceph cluster status

root@k8s-master01:~# kubectl -n rook-ceph exec -it rook-ceph-tools-66df78cf69-w6q94 -- bash
[rook@rook-ceph-tools-66df78cf69-w6q94 /]$ ceph -s
  cluster:
    id:     fa81657f-70c3-4ed8-acb8-b9be8e9d7273
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 9m)
    mgr: b(active, since 5m), standbys: a
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 6m), 3 in (since 7m)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 177 pgs
    objects: 49 objects, 6.6 KiB
    usage:   36 MiB used, 60 GiB / 60 GiB avail
    pgs:     177 active+clean

  io:
    client:   1.3 KiB/s rd, 2 op/s rd, 0 op/s wr

  progress:

(7) Configure a StorageClass and deploy a sample app

root@k8s-master01:~/rook/deploy/examples/csi/rbd# kubectl apply -f storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

root@k8s-master01:~/rook/deploy/examples# kubectl apply -f wordpress.yaml
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created

root@k8s-master01:~/rook/deploy/examples# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-7b989dbf57-2jp9z         1/1     Running   0          6m25s
wordpress-mysql-6965fc8cc8-bhxk7   1/1     Running   0          8m25s
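
Any workload can now request block storage through the rook-ceph-block StorageClass created above; a minimal PVC sketch (the name test-pvc and the size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block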

 

(8) Install MetalLB

1. Create the namespace
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml

2. Create the components
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

3. Create the secret
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

4. Create the ConfigMap (apply step sketched below)
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.110.118-192.168.110.119
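
The manifest above then needs to be applied; a sketch (the file name metallb_config.yaml is arbitrary):

# save the ConfigMap above as metallb_config.yaml, then:
kubectl apply -f metallb_config.yaml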

 

(9) Check the External IP - 192.168.110.119

root@k8s-master01:~# kubectl get svc
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
httpd             LoadBalancer   10.233.23.78   192.168.110.118   80:30519/TCP   107m
kubernetes        ClusterIP      10.233.0.1     <none>            443/TCP        4h28m
wordpress         LoadBalancer   10.233.45.71   192.168.110.119   80:31562/TCP   107m
wordpress-mysql   ClusterIP      None           <none>            3306/TCP       141m

 

 

 

(10) Call the page

* For K8s running on OpenStack, the port security option must be disabled ([OpenStack - Network - Ports - Edit Port - port security]) before the page can be reached from outside
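
The same change can be made from the OpenStack CLI; a sketch assuming the openstack client is available and <port-id> is the port attached to the relevant node (security groups must be detached before port security can be disabled):

openstack port set --no-security-group --disable-port-security <port-id>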

 

 

 

 

3. Teardown

 

(1) Delete the Ceph resources

- Remove the rook-ceph cluster

# kubectl --namespace rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'
cephcluster.ceph.rook.io/rook-ceph patched

# kubectl delete storageclasses ceph-block ceph-bucket ceph-filesystem
storageclass.storage.k8s.io "ceph-block" deleted
storageclass.storage.k8s.io "ceph-bucket" deleted
storageclass.storage.k8s.io "ceph-filesystem" deleted

#  kubectl --namespace rook-ceph delete cephblockpools ceph-blockpool
cephblockpool.ceph.rook.io "ceph-blockpool" deleted

# kubectl --namespace rook-ceph delete cephobjectstore ceph-objectstore
cephobjectstore.ceph.rook.io "ceph-objectstore" deleted

# kubectl --namespace rook-ceph delete cephfilesystem ceph-filesystem
cephfilesystem.ceph.rook.io "ceph-filesystem" deleted

# kubectl --namespace rook-ceph delete cephcluster rook-ceph
cephcluster.ceph.rook.io "rook-ceph" deleted

# helm --namespace rook-ceph uninstall rook-ceph-cluster
release "rook-ceph-cluster" uninstalled

 

(2) Helm Uninstall

# helm --namespace rook-ceph uninstall rook-ceph
...
release "rook-ceph" uninstalled


# kubectl delete crds cephblockpools.ceph.rook.io cephbucketnotifications.ceph.rook.io \
                      cephbuckettopics.ceph.rook.io \
                      cephclients.ceph.rook.io cephclusters.ceph.rook.io cephfilesystemmirrors.ceph.rook.io \
                      cephfilesystems.ceph.rook.io cephfilesystemsubvolumegroups.ceph.rook.io \
                      cephnfses.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectstores.ceph.rook.io \
                      cephobjectstoreusers.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io \
                      cephrbdmirrors.ceph.rook.io objectbucketclaims.objectbucket.io objectbuckets.objectbucket.io

 

(3) Wipe the disk data

# All Node (Ceph nodes)
all# rm -rf /var/lib/rook

all# export DISK="/dev/vdb"
all# sgdisk --zap-all $DISK
all# dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
all# blkdiscard $DISK
all# partprobe $DISK
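
If ceph-volume created LVM volumes on the disks, the Rook teardown guide also suggests removing the leftover device-mapper entries; a sketch of that extra cleanup:

all# ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
all# rm -rf /dev/ceph-*
all# rm -rf /dev/mapper/ceph--*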

 

(4) Force-delete the namespace via the API

DELNS=rook-ceph

kubectl proxy &
kubectl get ns ${DELNS} -o json | jq '.spec.finalizers=[]' | \
  curl -k -H "Content-Type: application/json" -X PUT --data-binary @- \
  http://127.0.0.1:8001/api/v1/namespaces/${DELNS}/finalize

 

(Reference) GitHub

 

 

Docker IP Range Change

1. Symptom

- When the docker0 interface range overlaps with in-house server IPs, communication with those servers fails

 

 

2. Remediation

2.1 Can be resolved with routing configuration at the OS level

 

2.2 Change the docker0 range

(1) With kubespray, this can be changed through the docker options in the docker.yml file

...
docker_options: "--bip 192.168.0.1/16"

 

 

(2) If the cluster is already deployed

 

Edit the /etc/docker/daemon.json file

{
    "bip":"192.168.0.1/16"
}

 

Restart docker

# systemctl restart docker

 

* Restarting docker should apply the change, but on some servers (possibly because they are VMs) it does not take effect; sometimes it applies after several restarts, but a reboot fixes it right away

 

 

3. Verify

# ip a | grep docker0

 

 

 

############# Addendum #############

 

If the bip option is put into daemon.json, cluster deletion/re-creation does not work correctly

-> another file also sets a bip option, so the options conflict

 

Instead of daemon.json, edit /etc/systemd/system/docker.service.d/docker-options.conf (a sketch follows)
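
A sketch of that change, assuming the kubespray-generated drop-in uses a DOCKER_OPTS environment variable (the exact contents vary by kubespray version, so only adjust the --bip value and keep the other options):

# /etc/systemd/system/docker.service.d/docker-options.conf
[Service]
Environment="DOCKER_OPTS=--bip=192.168.0.1/16"   # keep the existing options, only adjust --bip

# apply the change
systemctl daemon-reload
systemctl restart docker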

 

 

 

 

 

 


Add an option to csi-nfs-driverinfo.yaml

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: nfs.csi.k8s.io
spec:
  attachRequired: false
  fsGroupPolicy: File     # option added
  volumeLifecycleModes:
    - Persistent
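
A sketch of applying the change; fsGroupPolicy on a CSIDriver object is generally immutable, so the object likely has to be deleted and re-created rather than patched in place (assumption based on CSIDriver behavior in this Kubernetes version):

kubectl delete csidriver nfs.csi.k8s.io
kubectl apply -f csi-nfs-driverinfo.yaml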
