Goal

 - Configure rook-ceph to provide file and block storage on Kubernetes

 

 

1. Environment

(1) Server information

192.168.110.90 sung-deploy #deployment workstation
192.168.110.111 k8s-master01
192.168.110.112 k8s-master02
192.168.110.113 k8s-master03
192.168.110.114 k8s-worker01
192.168.110.115 k8s-worker02
192.168.110.116 k8s-worker03

 

(2) Disks for rook-ceph

k8s-worker01~03
/dev/vdb - additional 20 GB disk

*Deployment was too slow on 4-core/8GB nodes, so it was run with 8-core/16GB instead

 

 

 

2. Deployment preparation

(1) Configure the Helm repository

root@k8s-master01:~# helm repo add rook-release https://charts.rook.io/release

root@k8s-master01:~# helm repo list
NAME            URL
rook-release    https://charts.rook.io/release
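
To make sure the latest 1.9.x chart versions are visible before installing, the local chart index can be refreshed and the charts looked up (standard helm commands, output omitted):

root@k8s-master01:~# helm repo update
root@k8s-master01:~# helm search repo rook-release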

 

(2) Download the configuration files for the Rook Ceph deployment

git clone https://github.com/rook/rook.git
cd rook
git checkout release-1.9
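
The two charts used below sit under deploy/charts/ in the checked-out tree (rook-ceph for the operator, rook-ceph-cluster for the cluster itself):

root@k8s-master01:~/rook# ls deploy/charts/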

 

(3) Modify the chart values.yaml files

root@k8s-master01:~/rook/deploy/charts/rook-ceph# vi values.yaml
image:
  repository: rook/ceph
  tag: v1.9.8
  pullPolicy: IfNotPresent
  

root@k8s-master01:~/rook/deploy/charts/rook-ceph# cd ../rook-ceph-cluster/
root@k8s-master01:~/rook/deploy/charts/rook-ceph-cluster# vi values.yaml
toolbox:
  enabled: true   #changed
  image: rook/ceph:v1.9.8
  tolerations: []
  affinity: {}
  resources:
    limits:
      cpu: "500m"
      memory: "1Gi"
    requests:
      cpu: "100m"
      memory: "128Mi"

  storage: # cluster level storage configuration and selection
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^vd[b-z]"
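
Rook will only consume empty devices (see the notes printed by the install below), so before deploying it is worth confirming that /dev/vdb on each worker carries no filesystem or partitions; a minimal check, with wipefs only if leftover signatures need clearing (it destroys any data on the disk):

all# lsblk -f /dev/vdb        # FSTYPE column should be empty
all# wipefs -a /dev/vdb       # only if old signatures remain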

 

(4) Deploy

- Deploy the rook-ceph operator chart

root@k8s-master01:~/rook/deploy/charts/rook-ceph#  helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml
NAME: rook-ceph
LAST DEPLOYED: Mon Oct 17 23:01:35 2022
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Rook Operator has been installed. Check its status by running:
  kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"

Visit https://rook.io/docs/rook/latest for instructions on how to create and configure Rook clusters

Important Notes:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
- Each CephCluster must be deployed to its own namespace, the samples use `rook-ceph` for the namespace.
- The sample manifests assume you also installed the rook-ceph operator in the `rook-ceph` namespace.
- The helm chart includes all the RBAC required to create a CephCluster CRD in the same namespace.
- Any disk devices you add to the cluster in the 'CephCluster' must be empty (no filesystem and no partitions).


- Deploy the rook-ceph-cluster chart

root@k8s-master01:~/rook/deploy/charts/rook-ceph-cluster# helm install --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
NAME: rook-ceph-cluster
LAST DEPLOYED: Mon Oct 17 23:12:50 2022
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
  kubectl --namespace rook-ceph get cephcluster

Visit https://rook.io/docs/rook/latest/CRDs/ceph-cluster-crd/ for more information about the Ceph CRD.

Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`
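
Bringing up the mons, mgr and OSDs takes several minutes; the overall progress can be followed on the CephCluster resource, whose PHASE and HEALTH columns should eventually report Ready and HEALTH_OK:

root@k8s-master01:~# kubectl --namespace rook-ceph get cephcluster -w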

 

 

(5) Verify the Rook Ceph deployment

root@k8s-master01:~# kubectl -n rook-ceph get all
NAME                                                         READY   STATUS      RESTARTS   AGE
pod/csi-cephfsplugin-5cq5z                                   3/3     Running     0          9m5s
pod/csi-cephfsplugin-8vwsk                                   3/3     Running     0          9m5s
pod/csi-cephfsplugin-provisioner-6444b9b9db-dtsk5            6/6     Running     0          9m2s
pod/csi-cephfsplugin-provisioner-6444b9b9db-jl72p            6/6     Running     0          9m4s
pod/csi-cephfsplugin-xs26z                                   3/3     Running     0          9m5s
pod/csi-rbdplugin-6rwjd                                      3/3     Running     0          9m5s
pod/csi-rbdplugin-fb5cx                                      3/3     Running     0          9m5s
pod/csi-rbdplugin-provisioner-69b4cfddfb-25ljb               6/6     Running     0          9m5s
pod/csi-rbdplugin-provisioner-69b4cfddfb-z28gn               6/6     Running     0          9m5s
pod/csi-rbdplugin-x7vn4                                      3/3     Running     0          9m5s
pod/rook-ceph-crashcollector-k8s-worker01-69d48b85b7-j5gf2   1/1     Running     0          98s
pod/rook-ceph-crashcollector-k8s-worker02-6d74467c48-8tlcn   1/1     Running     0          2m8s
pod/rook-ceph-crashcollector-k8s-worker03-5bb9b85fcf-4zlvb   1/1     Running     0          6m20s
pod/rook-ceph-mds-ceph-filesystem-a-8587497f74-pz5g8         1/1     Running     0          2m8s
pod/rook-ceph-mds-ceph-filesystem-b-5b848db44b-hwfwc         1/1     Running     0          99s
pod/rook-ceph-mgr-a-7c5946494c-5bpgx                         2/2     Running     0          6m20s
pod/rook-ceph-mgr-b-7758f6dc8f-m879n                         2/2     Running     0          6m19s
pod/rook-ceph-mon-a-5556c65f6c-w42fx                         1/1     Running     0          8m48s
pod/rook-ceph-mon-b-6d5c6854c6-769k6                         1/1     Running     0          7m10s
pod/rook-ceph-mon-c-76b9849478-8vcdz                         1/1     Running     0          6m50s
pod/rook-ceph-operator-6f66df9cdc-p78kx                      1/1     Running     0          11m
pod/rook-ceph-osd-0-576f684f6f-wcqft                         1/1     Running     0          3m59s
pod/rook-ceph-osd-1-6b5656b998-m86d6                         1/1     Running     0          4m4s
pod/rook-ceph-osd-2-c74677fbf-n4m9p                          1/1     Running     0          3m58s
pod/rook-ceph-osd-prepare-k8s-worker01-lp7fq                 0/1     Completed   0          17s
pod/rook-ceph-osd-prepare-k8s-worker02-xsw4r                 1/1     Running     0          10s
pod/rook-ceph-osd-prepare-k8s-worker03-nnccs                 0/1     Init:0/1    0          5s
pod/rook-ceph-tools-66df78cf69-w6q94                         1/1     Running     0          9m18s

NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/csi-cephfsplugin-metrics         ClusterIP   10.233.44.102   <none>        8080/TCP,8081/TCP   9m
service/csi-rbdplugin-metrics            ClusterIP   10.233.8.158    <none>        8080/TCP,8081/TCP   9m5s
service/rook-ceph-mgr                    ClusterIP   10.233.46.239   <none>        9283/TCP            5m29s
service/rook-ceph-mgr-dashboard          ClusterIP   10.233.2.10     <none>        8443/TCP            5m29s
service/rook-ceph-mon-a                  ClusterIP   10.233.28.124   <none>        6789/TCP,3300/TCP   8m49s
service/rook-ceph-mon-b                  ClusterIP   10.233.30.201   <none>        6789/TCP,3300/TCP   7m11s
service/rook-ceph-mon-c                  ClusterIP   10.233.3.43     <none>        6789/TCP,3300/TCP   6m51s
service/rook-ceph-rgw-ceph-objectstore   ClusterIP   10.233.22.181   <none>        80/TCP              2m51s

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/csi-cephfsplugin   3         3         3       3            3           <none>          9m5s
daemonset.apps/csi-rbdplugin      3         3         3       3            3           <none>          9m5s

NAME                                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/csi-cephfsplugin-provisioner            2/2     2            2           9m5s
deployment.apps/csi-rbdplugin-provisioner               2/2     2            2           9m5s
deployment.apps/rook-ceph-crashcollector-k8s-worker01   1/1     1            1           6m20s
deployment.apps/rook-ceph-crashcollector-k8s-worker02   1/1     1            1           6m19s
deployment.apps/rook-ceph-crashcollector-k8s-worker03   1/1     1            1           6m20s
deployment.apps/rook-ceph-mds-ceph-filesystem-a         1/1     1            1           2m8s
deployment.apps/rook-ceph-mds-ceph-filesystem-b         1/1     1            1           99s
deployment.apps/rook-ceph-mgr-a                         1/1     1            1           6m20s
deployment.apps/rook-ceph-mgr-b                         1/1     1            1           6m19s
deployment.apps/rook-ceph-mon-a                         1/1     1            1           8m48s
deployment.apps/rook-ceph-mon-b                         1/1     1            1           7m10s
deployment.apps/rook-ceph-mon-c                         1/1     1            1           6m50s
deployment.apps/rook-ceph-operator                      1/1     1            1           11m
deployment.apps/rook-ceph-osd-0                         1/1     1            1           3m59s
deployment.apps/rook-ceph-osd-1                         1/1     1            1           4m4s
deployment.apps/rook-ceph-osd-2                         1/1     1            1           3m58s
deployment.apps/rook-ceph-tools                         1/1     1            1           9m18s

NAME                                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/csi-cephfsplugin-provisioner-6444b9b9db            2         2         2       9m5s
replicaset.apps/csi-rbdplugin-provisioner-69b4cfddfb               2         2         2       9m5s
replicaset.apps/rook-ceph-crashcollector-k8s-worker01-69d48b85b7   1         1         1       6m20s
replicaset.apps/rook-ceph-crashcollector-k8s-worker01-fcdb54d45    0         0         0       4m4s
replicaset.apps/rook-ceph-crashcollector-k8s-worker02-6c5fbbbd78   0         0         0       3m58s
replicaset.apps/rook-ceph-crashcollector-k8s-worker02-6d74467c48   1         1         1       6m19s
replicaset.apps/rook-ceph-crashcollector-k8s-worker03-5bb9b85fcf   1         1         1       6m20s
replicaset.apps/rook-ceph-mds-ceph-filesystem-a-8587497f74         1         1         1       2m8s
replicaset.apps/rook-ceph-mds-ceph-filesystem-b-5b848db44b         1         1         1       99s
replicaset.apps/rook-ceph-mgr-a-7c5946494c                         1         1         1       6m20s
replicaset.apps/rook-ceph-mgr-b-7758f6dc8f                         1         1         1       6m19s
replicaset.apps/rook-ceph-mon-a-5556c65f6c                         1         1         1       8m48s
replicaset.apps/rook-ceph-mon-b-6d5c6854c6                         1         1         1       7m10s
replicaset.apps/rook-ceph-mon-c-76b9849478                         1         1         1       6m50s
replicaset.apps/rook-ceph-operator-6f66df9cdc                      1         1         1       11m
replicaset.apps/rook-ceph-osd-0-576f684f6f                         1         1         1       3m59s
replicaset.apps/rook-ceph-osd-1-6b5656b998                         1         1         1       4m4s
replicaset.apps/rook-ceph-osd-2-c74677fbf                          1         1         1       3m58s
replicaset.apps/rook-ceph-tools-66df78cf69                         1         1         1       9m18s

NAME                                           COMPLETIONS   DURATION   AGE
job.batch/rook-ceph-osd-prepare-k8s-worker01   1/1           14s        17s
job.batch/rook-ceph-osd-prepare-k8s-worker02   0/1           10s        10s
job.batch/rook-ceph-osd-prepare-k8s-worker03   0/1           5s         6s
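
With the default chart values the cluster chart also creates StorageClasses for block, filesystem and object storage (ceph-block, ceph-filesystem and ceph-bucket - the same names removed in section 3 below); they can be listed with:

root@k8s-master01:~# kubectl get storageclass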

 

(6) Check the cluster status

root@k8s-master01:~# kubectl -n rook-ceph exec -it rook-ceph-tools-66df78cf69-w6q94 -- bash
[rook@rook-ceph-tools-66df78cf69-w6q94 /]$ ceph -s
  cluster:
    id:     fa81657f-70c3-4ed8-acb8-b9be8e9d7273
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 9m)
    mgr: b(active, since 5m), standbys: a
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 6m), 3 in (since 7m)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 177 pgs
    objects: 49 objects, 6.6 KiB
    usage:   36 MiB used, 60 GiB / 60 GiB avail
    pgs:     177 active+clean

  io:
    client:   1.3 KiB/s rd, 2 op/s rd, 0 op/s wr

  progress:
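
A couple of further checks from the same toolbox pod confirm that all three OSDs are up and that the 20 GB disks were consumed (standard ceph CLI commands):

[rook@rook-ceph-tools-66df78cf69-w6q94 /]$ ceph osd status
[rook@rook-ceph-tools-66df78cf69-w6q94 /]$ ceph df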

(7) Create a StorageClass and deploy a sample app

root@k8s-master01:~/rook/deploy/examples/csi/rbd# kubectl apply -f storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
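
The WordPress sample expects a MySQL backend; the upstream rook examples ship a mysql.yaml for this, which claims a PVC from the same rook-ceph-block StorageClass and is applied first (the wordpress-mysql pod seen below comes from it):

root@k8s-master01:~/rook/deploy/examples# kubectl apply -f mysql.yaml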

root@k8s-master01:~/rook/deploy/examples# kubectl apply -f wordpress.yaml
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created

root@k8s-master01:~/rook/deploy/examples# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-7b989dbf57-2jp9z         1/1     Running   0          6m25s
wordpress-mysql-6965fc8cc8-bhxk7   1/1     Running   0          8m25s
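
Both sample pods mount PersistentVolumes provisioned through the rook-ceph-block StorageClass created above; the claims should show STATUS Bound:

root@k8s-master01:~/rook/deploy/examples# kubectl get pvc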

 

(8) Install MetalLB

1. Create the namespace
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml

2. Create the components
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

3. Create the secret
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

4. Create the ConfigMap
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.110.118-192.168.110.119
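
The ConfigMap above still has to be applied; saving it to a file (the file name is arbitrary) and applying it:

# vi metallb-config.yaml     # paste the ConfigMap above
# kubectl apply -f metallb-config.yaml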

 

(9) Check the External IP - 192.168.110.119

root@k8s-master01:~# kubectl get svc
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
httpd             LoadBalancer   10.233.23.78   192.168.110.118   80:30519/TCP   107m
kubernetes        ClusterIP      10.233.0.1     <none>            443/TCP        4h28m
wordpress         LoadBalancer   10.233.45.71   192.168.110.119   80:31562/TCP   107m
wordpress-mysql   ClusterIP      None           <none>            3306/TCP       141m

 

 

 

(10) Access the page

*On K8s clusters running on OpenStack, the page can only be reached externally after disabling the port security option (OpenStack - Network - Ports - Edit Port).
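
Once the port security setting is handled, the WordPress page should answer on the External IP assigned above; a quick check from any host that can reach the 192.168.110.x range:

# curl -I http://192.168.110.119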

 

 

 

 

3. Removal

 

(1) Delete the cluster resources

- Remove the rook-ceph cluster

# kubectl --namespace rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'
cephcluster.ceph.rook.io/rook-ceph patched

# kubectl delete storageclasses ceph-block ceph-bucket ceph-filesystem
storageclass.storage.k8s.io "ceph-block" deleted
storageclass.storage.k8s.io "ceph-bucket" deleted
storageclass.storage.k8s.io "ceph-filesystem" deleted

#  kubectl --namespace rook-ceph delete cephblockpools ceph-blockpool
cephblockpool.ceph.rook.io "ceph-blockpool" deleted

# kubectl --namespace rook-ceph delete cephobjectstore ceph-objectstore
cephobjectstore.ceph.rook.io "ceph-objectstore" deleted

# kubectl --namespace rook-ceph delete cephfilesystem ceph-filesystem
cephfilesystem.ceph.rook.io "ceph-filesystem" deleted

# kubectl --namespace rook-ceph delete cephcluster rook-ceph
cephcluster.ceph.rook.io "rook-ceph" deleted

# helm --namespace rook-ceph uninstall rook-ceph-cluster
release "rook-ceph-cluster" uninstalled

 

(2) Helm Uninstall

# helm --namespace rook-ceph uninstall rook-ceph
...
release "rook-ceph" uninstalled


# kubectl delete crds cephblockpools.ceph.rook.io cephbucketnotifications.ceph.rook.io \
                      cephbuckettopics.ceph.rook.io \
                      cephclients.ceph.rook.io cephclusters.ceph.rook.io cephfilesystemmirrors.ceph.rook.io \
                      cephfilesystems.ceph.rook.io cephfilesystemsubvolumegroups.ceph.rook.io \
                      cephnfses.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectstores.ceph.rook.io \
                      cephobjectstoreusers.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io \
                      cephrbdmirrors.ceph.rook.io objectbucketclaims.objectbucket.io objectbuckets.objectbucket.io
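
After this, no Rook-related CRDs should remain; a quick verification:

# kubectl get crd | grep -E 'ceph.rook.io|objectbucket.io'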

 

(3) Remove disk data

# Run on all nodes (Ceph nodes)
all# rm -rf /var/lib/rook

all# export DISK="/dev/vdb"
all# sgdisk --zap-all $DISK
all# dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
all# blkdiscard $DISK
all# partprobe $DISK
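
Once wiped, the disk should show no filesystem signatures or partitions, so a future deployment can consume it again:

all# lsblk -f $DISK           # FSTYPE and LABEL columns should be empty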

 

(4) Delete the namespace via the API

DELNS=rook-ceph

kubectl proxy &
kubectl get ns ${DELNS} -o json | jq '.spec.finalizers = []' | \
  curl -k -H "Content-Type: application/json" -X PUT --data-binary @- http://127.0.0.1:8001/api/v1/namespaces/${DELNS}/finalize
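
Once the finalizers are cleared, the namespace should disappear; verify with:

# kubectl get ns rook-ceph    # expected result: NotFound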

 

(Reference) GitHub

 

 
