How to configure an insecure registry with containerd

 

Version: containerd://1.6.15

 

 

1. Update the configuration

root@bee-master01:/etc/containerd# cat /etc/containerd/config.toml
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0

[grpc]
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  level = "info"

[metrics]
  address = ""
  grpc_histogram = false

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
    max_container_log_line_size = -1
    enable_unprivileged_ports = false
    enable_unprivileged_icmp = false
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      snapshotter = "overlayfs"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          runtime_engine = ""
          runtime_root = ""
          base_runtime_spec = "/etc/containerd/cri-base.json"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.110.90:5000"] #추가
          endpoint = ["http://192.168.110.90:5000"]                                  #추가
      [plugins."io.containerd.grpc.v1.cri".registry.configs]                         #추가
        [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.110.90:5000".tls] #추가
          insecure_skp_verify = true        #추가
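
Note: on containerd 1.6 and later, the registry.mirrors/registry.configs tables above still work but are deprecated in favor of a config_path pointing at per-registry hosts.toml files. A minimal sketch of the equivalent setup (assuming the conventional /etc/containerd/certs.d layout):

[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

# /etc/containerd/certs.d/192.168.110.90:5000/hosts.toml
server = "http://192.168.110.90:5000"

[host."http://192.168.110.90:5000"]
  capabilities = ["pull", "resolve"]
  skip_verify = true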

 

 

2. Restart containerd

# systemctl restart containerd
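
One way to confirm that containerd picked up the registry settings after the restart (a sketch; the JSON layout of the output varies by version):

# crictl info | grep -A 6 registry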

 

 

3. Pull an image

root@bee-master01:/etc/containerd# crictl pull  192.168.110.90:5000/library/nginx:1.19
Image is up to date for sha256:f0b8a9a541369db503ff3b9d4fa6de561b300f7363920c2bff4577c6c24c5cf6

 

 

 

*When using the ctr command, an extra option is required; by default ctr does not read the CRI registry settings in containerd/config.toml.
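
For example, pulling from the HTTP registry with ctr might look like this (a sketch; --plain-http tells ctr to use HTTP instead of HTTPS):

# ctr -n k8s.io images pull --plain-http 192.168.110.90:5000/library/nginx:1.19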

---

[Nexus3] Configuring a proxy docker registry

1. Start nexus3

root@sung-deploy:~# docker pull sonatype/nexus3

root@sung-deploy:~# mkdir -p /data2/nexus-data && chown -R 200 /data2/nexus-data/

root@sung-deploy:~# docker run -d -p 8081:8081 -p 5000:5000 --name nexus -v /data2/nexus-data:/nexus-data sonatype/nexus3
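
Once Nexus finishes starting, the initial admin password can be read from the data volume (assuming the admin.password location used by recent sonatype/nexus3 images):

# docker logs -f nexus                        (wait until Nexus reports it has started)
# cat /data2/nexus-data/admin.password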

 

 

 

2. Create a Blob Store

[Screenshot: creating a Blob Store named docker-hub]

3. Create a Repository

[Screenshot: Create repository — presumably a docker (proxy) repository pointing at https://registry-1.docker.io, with an HTTP connector on port 5000 and the docker-hub blob store]

4. Add the insecure registry

root@sung-deploy:~# cat /etc/docker/daemon.json
{
                "insecure-registries" : ["192.168.110.90:5000"]
}

 

 

5. Restart docker

# systemctl restart docker
# docker start CONTAINER_NAME
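
To verify that docker picked up the setting (a sketch):

# docker info | grep -A 2 'Insecure Registries'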

 

 

 

6. Pull an image

root@sung-deploy:~# docker pull 192.168.110.90:5000/library/nginx:1.19


*The library/ prefix is required (official Docker Hub images live under the library namespace).

 

*When using containerd (containerd://1.6.15) instead of docker, the insecure registry settings in /etc/containerd/config.toml are the same as those in the containerd section at the top of this document.

---

Kubernetes knowledge fragments (work in progress)

 

1. kube-controller-manager: role and controller types

Control plane component that runs controller processes.

Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.

These controllers include:

  • Node controller: responsible for noticing and responding when nodes go down.
  • Job controller: watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
  • Endpoints controller: populates the Endpoints objects (that is, joins Services & Pods).
  • Service Account & Token controllers: create default accounts and API access tokens for new namespaces.

 

 

2. How can load on the apiserver, etcd, etc. be addressed in large clusters?

-> Beyond raising resource limits (CPU) and using SSDs, could architectural changes help?

-> Would GPUs make it faster?

 

 

3. Kubernetes authentication scheme

Account authentication

Authorization

...

 

4. When a command is sent to the apiserver, how is it handled by the OS?

...

 

5. Probe types and descriptions

Probe types

The kubelet can optionally perform and react to three kinds of probes on running containers.

livenessProbe: indicates whether the container is running.

If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.

readinessProbe: indicates whether the container is ready to respond to requests.

If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.

startupProbe: indicates whether the application within the container has started.

If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
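
A minimal sketch (hypothetical pod name and check paths, using httpGet probes) showing how the three probes are declared per container:

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:1.23.2-alpine
    startupProbe:             # other probes are disabled until this succeeds
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:            # on failure, the kubelet kills and restarts the container
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:           # on failure, the Pod IP is removed from Service endpoints
      httpGet:
        path: /
        port: 80
      periodSeconds: 5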

 

 

6. Network Policy

https://kubernetes.io/ko/docs/concepts/services-networking/network-policies/

 

 

 

7. RBAC good practices

https://kubernetes.io/ko/docs/concepts/security/rbac-good-practices/

 

 

 

 

 

 

 

 

 

 

---

[K8s] v1.25 Cgroup v2 and MemoryQoS

1. cgroup

 

cgroups are a Linux kernel feature that limits and manages resources such as CPU and memory.

The Linux kernel provides two versions: cgroup v1 and cgroup v2.

 

cgroup v2 has been under development in the Linux kernel since 2016,

and cgroup v2 support went GA (general availability) in K8s v1.25.

 

 

cgroup v2 offers the following improvements:

  • A single unified hierarchy design in the API
  • Safer sub-tree delegation to containers
  • Newer features like Pressure Stall Information
  • Enhanced resource allocation management and isolation across multiple resources
    • Unified accounting for different types of memory allocations (network and kernel memory, etc.)
    • Accounting for non-immediate resource changes such as page cache writeback

 

Some Kubernetes features use cgroup v2 exclusively for enhanced resource management and isolation. For example, the MemoryQoS feature improves memory utilization and relies on cgroup v2 primitives to enable it. New resource-management features in the kubelet will take advantage of new cgroup v2 capabilities going forward.

 

 

cgroup v2 is the default from Ubuntu 21.10 onward, but earlier releases can also be configured to use cgroup v2 on kernel 5.8 or newer (recommended).

 

 

How to enable it on Ubuntu 20.04

### How to enable cgroup v2
1. Add the systemd.unified_cgroup_hierarchy=1 option
Add it to the GRUB_CMDLINE_LINUX="" entry in /etc/default/grub
-> GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"


2. Update grub
$ update-grub


3. Reboot
$ reboot


### How to verify
1.
root@k8s-worker02:~# stat -c %T -f /sys/fs/cgroup
cgroup2fs

# cgroup2fs means cgroup v2 is in use
# tmpfs means cgroup v1 is in use


2.
If the file below exists, cgroup v2 is in use:
root@k8s-worker02:~# ls -l /sys/fs/cgroup/cgroup.controllers
-r--r--r-- 1 root root 0 Mar  7 01:20 /sys/fs/cgroup/cgroup.controllers

 

References:

https://sleeplessbeastie.eu/2021/09/10/how-to-enable-control-group-v2/

https://kubernetes.io/docs/concepts/architecture/cgroups/#check-cgroup-version

https://kubernetes.io/blog/2022/08/31/cgroupv2-ga-1-25/


 

2. MemoryQoS

MemoryQoS was added in k8s v1.22 and is still in alpha.

It can be enabled via a K8s feature gate.

 

cgroup v1 only provides CPU throttling knobs such as cpu_share, cpu_set, cpu_quota, and cpu_period, so MemoryQoS cannot be used there; cgroup v2 supports the MemoryQoS feature.

  • Provides guarantees of memory availability for Pod and container memory requests and limits
  • Provides guarantees of memory availability for node resources
  • Uses the new cgroup v2 memory knobs (memory.min/memory.high) for Pod- and container-level cgroups
  • Uses the new cgroup v2 memory knob (memory.min) for node-level cgroups

memory.min: the minimum amount of memory the cgroup must always retain (set from requests.memory)
memory.max: if the cgroup's memory usage reaches this limit and cannot be reduced, the system OOM killer is invoked on the cgroup
memory.low: best-effort memory protection
memory.high: memory usage throttle limit; usage may temporarily exceed this limit in certain situations (derived from limits.memory or node allocatable memory)

 

How memory.high is calculated

[k8s v1.22]

With requests.memory=50 and limits.memory=100, multiplying by the throttling factor 0.8 gives memory.high=80.

throttling factor = 0.8

memory.high = (limits.memory or node allocatable memory) * memory throttling factor,
where the default value of the memory throttling factor is 0.8

 

[k8s v1.27]

If no container memory limit is specified, node allocatable memory is substituted, and the throttling factor 0.9 is applied.

throttling factor = 0.9

memory.high = floor[(requests.memory + memory throttling factor * (limits.memory or node allocatable memory - requests.memory)) / pageSize] * pageSize, where the default value of the memory throttling factor is 0.9
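
A worked example of the v1.27 formula (assuming requests.memory=50Mi, limits.memory=100Mi, and a 4KiB page size):

memory.high = floor[(50Mi + 0.9 * (100Mi - 50Mi)) / 4KiB] * 4KiB
            = floor[95Mi / 4KiB] * 4KiB
            = 95Mi   (already a multiple of the page size)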

 

 

 

 

How to enable MemoryQoS on k8s v1.25

# Enable MemoryQoS
Add the options below to /etc/kubernetes/kubelet-config.yaml, then restart the kubelet
...
featureGates:
  MemoryQoS: true
...


# How to verify
root@k8s-worker02:~# ctr -n k8s.io c info 8b9e8c3f8815425dafea3530281fb967c23f97acd2a83009708b9ce46a6755c4 | grep memory
                "memory": {
                    "memory.high": "107374182",
                    "memory.min": "67108864"
                    
 
 
*How to set it with kubespray
vi ~/roles/kubespray-defaults/defaults/main.yaml
kubelet_feature_gates: []
-> kubelet_feature_gates: [MemoryQoS=true]

 

*MemoryQoS can only be enabled when cgroup v2 is in use.

 

 

 

References:

https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2570-memory-qos/#readme

https://kubernetes.io/blog/2021/11/26/qos-memory-resources/

 

 


Update:

*After enabling MemoryQoS, load increased abnormally, so the feature was disabled.

---

[K8s] Useful commands

Check the list of container images in use

root@k8s-master01:~# kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |tr -s '[[:space:]]' '\n' |sort | uniq -c
      2 docker.io/library/nginx:1.23.2-alpine
      1 quay.io/calico/kube-controllers:v3.24.5
      5 quay.io/calico/node:v3.24.5
      3 quay.io/coreos/etcd:v3.5.6
      2 registry.k8s.io/coredns/coredns:v1.9.3
      1 registry.k8s.io/cpa/cluster-proportional-autoscaler-amd64:1.8.5
      5 registry.k8s.io/dns/k8s-dns-node-cache:1.21.1
      3 registry.k8s.io/kube-apiserver:v1.25.6
      3 registry.k8s.io/kube-controller-manager:v1.25.6
      5 registry.k8s.io/kube-proxy:v1.25.6
      3 registry.k8s.io/kube-scheduler:v1.25.6

 

 

Sort events by time

root@k8s-master01:~# kubectl -n kube-system get event --sort-by='lastTimestamp'

---

API usage
Starting with k8s v1.24, a token is no longer created automatically when a ServiceAccount is created.
The change appears intended to stop issuing non-expiring tokens by default.



root@k8s-master01:/etc/kubernetes/ssl# kubectl -n kube-system get clusterrole | grep admin
admin                                                                  2022-12-28T08:53:35Z
cluster-admin                                                          2022-12-28T08:53:35Z
system:aggregate-to-admin                                              2022-12-28T08:53:35Z
system:kubelet-api-admin                                               2022-12-28T08:53:35Z




kubectl create sa admin-user -n kube-system

kubectl create clusterrolebinding --clusterrole=cluster-admin admin-user --serviceaccount=kube-system:admin-user
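
The binding can be checked with kubectl auth can-i (a sketch):

kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:admin-user
# expected output: yes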


root@k8s-master01:/etc/kubernetes/ssl# kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6ImlVbmVfYWR4VE1kMlJneHBMZFBVcUlqUFBpNFBLcTFfQVgwNFBIUEhOU0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjc1MDU3MjM2LCJpYXQiOjE2NzUwNTM2MzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiODNkYWI4MzYtNzgyNy00OWY0LWI1OGYtOTk5YzAwNWYwNTcxIn19LCJuYmYiOjE2NzUwNTM2MzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.LdWBWVFX737--9ekdpe2I3sRTsRgz5jEvmYFUK_k_ZRopKIYDSoBXpiGXliTxzvsodtZF4H9ESQLjjYx9YYPOBEk-Pmw5BvPKMvcpx9futRIpz85U7_YNAeGphTJGJnHc4rIUILJ4cngJBjAhyi6XJ55bnT1EDYf3KSdPRtfnV0XmiJuEUb0qboCcaNdr9a3ltipitI6AcjYxqbzzPO4dHo6S4ay5aV6M26ZS_ZsAVI64_oLWX641B-skkZlrP4FoJluZvoHZHbKi_AkvnC2VCIoUCmpcR36uH8j-9ZMUMy9gGjQSxXy_NekXJzm5PSz9Qx5VczyP7Pt89XG3L5X1w

root@k8s-master01:~# cat secret.yml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: admin-user-secret
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "admin-user"

root@k8s-master01:~# kubectl apply -f secret.yml
secret/admin-user-secret created

root@k8s-master01:~# kubectl -n kube-system get secret
NAME                TYPE                                  DATA   AGE
admin-user-secret   kubernetes.io/service-account-token   3      5s



root@k8s-master01:~# kubectl -n kube-system get secrets -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1USXlPREE0TlRNeU1Gb1hEVE15TVRJeU5UQTROVE15TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBUFVaCnI5MFgvVjErWVk0aXk4UXdwR0QxL0krckt4Q0dDbUxqVFM3T1RZblE4UTJ5ZXVNa3J5djN3bmg3d2J4NldMK0gKdDNKWm16c3NwdXZGMnM2ZjEza1FFaExUa1lnck5HT3RIRWgyZDF0R25XTTEwdjdaMUQyeldsNjJxeGRwQlh2UQpHTjhxbU9TUEtBcUIrOUhXSlFlYmxTUUN3NWtXWGZPbFVJM240dmNqWHphL20zeUhpSHgyQlgxdWlEZUJrZ1JrCkxkNzU1R1ZHNGpoWUFQcWswOHJVRU9jQ3ZtSEYzaVk4K0Evc1NWWUlJZmZMRTVQZUVSL2JWMzBUWG9LWkIxeFkKN0ppNjVITFJ6Y2tyTWVXeTBLazlFSjJRY09PcWNnQzl5NndBWkY0K2Q0RHhvRGJkQUZzdjRSc2NXRmdaKzc2UApHQ2dsRWFjOVJMNGk2RXIraDBFQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZOVVVNU0RRczJhTVVJc3BMTWJ1ZVl3MmdVc01NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSjJKY290SjRUSnZ6VDVseHJJQQprVFRoNmFaM01XVzNUcEFVWWgydDI0U21WUmsyb3lFelZrSitVanJRcXU0Y1ZwZGVZNm5xemVHbHhMNi83V0NvCkE4Q01OdTZqWi9yc0ZMbnVReFJraGJIVFlKVmRGL0Y2ZHJOeFJDbGV3RnppdUxwdDduTFFPbndURVoycFA3OFkKYzZadGxPREtoNWlLM2pPNFgxU1VxUEQxcGcxL0pwWGNUZ09IKy9Gam83TDZEMkNRWXBCZEJGUFpzQ1AyTGtGcwpycGI0aHpkVHZ0YzF1QWhEeW4xSmZZQnFvTG43dzlXQ3F2d0JmVWIvYW9pcjdPZXJ2eWpBUExDbUZDVVpQWjZwCjlvcUJhRTFoZWduTm14TVE5VSthRUp0dGIzcTRhOXAzSzlsQVFCRE40MGVReUNXdGRDOUpqV0wrTm56QThBZ0oKYU1BPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    namespace: a3ViZS1zeXN0ZW0=
    token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltbFZibVZmWVdSNFZFMWtNbEpuZUhCTVpGQlZjVWxxVUZCcE5GQkxjVEZmUVZnd05GQklVRWhPVTBraWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpyZFdKbExYTjVjM1JsYlNJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZqY21WMExtNWhiV1VpT2lKaFpHMXBiaTExYzJWeUxYTmxZM0psZENJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExtNWhiV1VpT2lKaFpHMXBiaTExYzJWeUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaU9ETmtZV0k0TXpZdE56Z3lOeTAwT1dZMExXSTFPR1l0T1RrNVl6QXdOV1l3TlRjeElpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbXQxWW1VdGMzbHpkR1Z0T21Ga2JXbHVMWFZ6WlhJaWZRLmFiVUxQRWNnTkZkR2Q1aXg5Mnd1TjRHN3NfZ0t3b20zTWVMYXA3d2wydUxKQVRDaTZtTmFBLTlGOTQ5bVFBVVJKSGJLbTJvZUFkWlBhYnE5MXFDdXh4NEVZNGI5ejhEZF9rTFZqZGxyRlZXaVJJVzBjcnZ3OWlvX05sakV0VWZoT1VBX2o0THJ6VlpkeDhfWmNpclVObTJ0Qmh4NGR4Smp1YUlNVzB5eXFBd2xYeG1lZWlnZnhfOGJOTU9mRlNaSk9ZNXpPSGxybDNSWWMyNTRWWjc1aTlxQXNZVVpEVkxJRFpoeUZRQnVVOGNwUFB1eWhzVV9UdXJmcmdpTzlpUlpEUmNTWjg4WEszSmdVRXF2MnVTTXdtVEI0NURDWGZHN1U0bHBBRDdkQ3IzMTFONlREZ1ZxOHBwcFFhbURTS1lGZGhBX3dxS0tyLWJiWE55dnh4Yk15QQ==
  kind: Secret
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"admin-user"},"name":"admin-user-secret","namespace":"kube-system"},"type":"kubernetes.io/service-account-token"}
      kubernetes.io/service-account.name: admin-user
      kubernetes.io/service-account.uid: 83dab836-7827-49f4-b58f-999c005f0571
    creationTimestamp: "2023-01-30T04:49:54Z"
    name: admin-user-secret
    namespace: kube-system
    resourceVersion: "7004507"
    uid: a25ebd8b-083e-4897-9860-fa3f25ac1ed9
  type: kubernetes.io/service-account-token
kind: List
metadata:
  resourceVersion: ""


root@k8s-master01:~# kubectl get secrets -n kube-system admin-user-secret -o jsonpath={.data.token} | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6ImlVbmVfYWR4VE1kMlJneHBMZFBVcUlqUFBpNFBLcTFfQVgwNFBIUEhOU0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODNkYWI4MzYtNzgyNy00OWY0LWI1OGYtOTk5YzAwNWYwNTcxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmFkbWluLXVzZXIifQ.abULPEcgNFdGd5ix92wuN4G7s_gKwom3MeLap7wl2uLJATCi6mNaA-9F949mQAURJHbKm2oeAdZPabq91qCuxx4EY4b9z8Dd_kLVjdlrFVWiRIW0crvw9io_NljEtUfhOUA_j4LrzVZdx8_ZcirUNm2tBhx4dxJjuaIMW0yyqAwlXxmeeigfx_8bNMOfFSZJOY5zOHlrl3RYc254VZ75i9qAsYUZDVLIDZhyFQBuU8cpPPuyhsU_TurfrgiO9iRZDRcSZ88XK3JgUEqv2uSMwmTB45DCXfG7U4lpAD7dCr311N6TDgVq8pppQamDSKYFdhA_wqKKr-bbXNyvxxbMyA



*Reference
curl --cacert ca.crt --oauth2-bearer "<token>" https://....
curl --cacert ca.crt -H "Authorization: Bearer <token>" https://192.168.110.111:6443/api/v1/nodes

curl --cacert ca.crt -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6ImlVbmVfYWR4VE1kMlJneHBMZFBVcUlqUFBpNFBLcTFfQVgwNFBIUEhOU0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODNkYWI4MzYtNzgyNy00OWY0LWI1OGYtOTk5YzAwNWYwNTcxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmFkbWluLXVzZXIifQ.abULPEcgNFdGd5ix92wuN4G7s_gKwom3MeLap7wl2uLJATCi6mNaA-9F949mQAURJHbKm2oeAdZPabq91qCuxx4EY4b9z8Dd_kLVjdlrFVWiRIW0crvw9io_NljEtUfhOUA_j4LrzVZdx8_ZcirUNm2tBhx4dxJjuaIMW0yyqAwlXxmeeigfx_8bNMOfFSZJOY5zOHlrl3RYc254VZ75i9qAsYUZDVLIDZhyFQBuU8cpPPuyhsU_TurfrgiO9iRZDRcSZ88XK3JgUEqv2uSMwmTB45DCXfG7U4lpAD7dCr311N6TDgVq8pppQamDSKYFdhA_wqKKr-bbXNyvxxbMyA" https://192.168.110.111:6443/api/v1/nodes | jq '[.items[] | select( .metadata.labels."node-role.kubernetes.io/master" | not) | .status.capacity.pods | tonumber] | add'

# Get a specific pod's details
curl --cacert ca.crt -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6ImlVbmVfYWR4VE1kMlJneHBMZFBVcUlqUFBpNFBLcTFfQVgwNFBIUEhOU0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODNkYWI4MzYtNzgyNy00OWY0LWI1OGYtOTk5YzAwNWYwNTcxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmFkbWluLXVzZXIifQ.abULPEcgNFdGd5ix92wuN4G7s_gKwom3MeLap7wl2uLJATCi6mNaA-9F949mQAURJHbKm2oeAdZPabq91qCuxx4EY4b9z8Dd_kLVjdlrFVWiRIW0crvw9io_NljEtUfhOUA_j4LrzVZdx8_ZcirUNm2tBhx4dxJjuaIMW0yyqAwlXxmeeigfx_8bNMOfFSZJOY5zOHlrl3RYc254VZ75i9qAsYUZDVLIDZhyFQBuU8cpPPuyhsU_TurfrgiO9iRZDRcSZ88XK3JgUEqv2uSMwmTB45DCXfG7U4lpAD7dCr311N6TDgVq8pppQamDSKYFdhA_wqKKr-bbXNyvxxbMyA" https://192.168.110.111:6443/api/v1/namespaces/kube-system/pods/calico-node-6jmxc



# Set the token in a variable
export bearertoken=eyJhbGciOiJSUzI1NiIsImtpZCI6ImlVbmVfYWR4VE1kMlJneHBMZFBVcUlqUFBpNFBLcTFfQVgwNFBIUEhOU0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiODNkYWI4MzYtNzgyNy00OWY0LWI1OGYtOTk5YzAwNWYwNTcxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmFkbWluLXVzZXIifQ.abULPEcgNFdGd5ix92wuN4G7s_gKwom3MeLap7wl2uLJATCi6mNaA-9F949mQAURJHbKm2oeAdZPabq91qCuxx4EY4b9z8Dd_kLVjdlrFVWiRIW0crvw9io_NljEtUfhOUA_j4LrzVZdx8_ZcirUNm2tBhx4dxJjuaIMW0yyqAwlXxmeeigfx_8bNMOfFSZJOY5zOHlrl3RYc254VZ75i9qAsYUZDVLIDZhyFQBuU8cpPPuyhsU_TurfrgiO9iRZDRcSZ88XK3JgUEqv2uSMwmTB45DCXfG7U4lpAD7dCr311N6TDgVq8pppQamDSKYFdhA_wqKKr-bbXNyvxxbMyA



# Create an nginx pod
root@k8s-master01:~# kubectl run --image nginx nginx-pod --dry-run=client -o json | jq -c .
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-pod","creationTimestamp":null,"labels":{"run":"nginx-pod"}},"spec":{"containers":[{"name":"nginx-pod","image":"nginx","resources":{}}],"restartPolicy":"Always","dnsPolicy":"ClusterFirst"},"status":{}}

curl --cacert ca.crt --oauth2-bearer "$bearertoken" -X POST -H 'Content-Type: application/json' https://192.168.110.111:6443/api/v1/namespaces/default/pods --data '{"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-pod","creationTimestamp":null,"labels":{"run":"nginx-pod"}},"spec":{"containers":[{"name":"nginx-pod","image":"nginx","resources":{}}],"restartPolicy":"Always","dnsPolicy":"ClusterFirst"},"status":{}}'


# Another way to create the nginx pod
Create an a.json file containing the JSON, then:
curl --cacert ca.crt --oauth2-bearer "$bearertoken" -X POST -H 'Content-Type: application/json' https://192.168.110.111:6443/api/v1/namespaces/default/pods --data @a.json


# Delete the nginx pod
kubectl delete pod nginx-pod --dry-run=client -o json | jq -c .
curl --cacert ca.crt --oauth2-bearer "$bearertoken" -X DELETE https://192.168.110.111:6443/api/v1/namespaces/default/pods/nginx-pod




# Create a deployment
kubectl create deployment nginx-deployment --image=nginx --dry-run=client -o json | jq -c .

root@k8s-master01:~# kubectl create deployment nginx-deployment --image=nginx --dry-run=client -o json | jq -c .
{"kind":"Deployment","apiVersion":"apps/v1","metadata":{"name":"nginx-deployment","creationTimestamp":null,"labels":{"app":"nginx-deployment"}},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"nginx-deployment"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx-deployment"}},"spec":{"containers":[{"name":"nginx","image":"nginx","resources":{}}]}},"strategy":{}},"status":{}}

curl --cacert ca.crt --oauth2-bearer "$bearertoken" -X POST -H 'Content-Type: application/json' https://192.168.110.111:6443/apis/apps/v1/namespaces/default/deployments --data '{"kind":"Deployment","apiVersion":"apps/v1","metadata":{"name":"nginx-deployment","creationTimestamp":null,"labels":{"app":"nginx-deployment"}},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"nginx-deployment"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx-deployment"}},"spec":{"containers":[{"name":"nginx","image":"nginx","resources":{}}]}},"strategy":{}},"status":{}}'

# Delete the deployment
curl --cacert ca.crt --oauth2-bearer "$bearertoken" -X DELETE https://192.168.110.111:6443/apis/apps/v1/namespaces/default/deployments/nginx-deployment



# Alternative authentication using a client certificate and key
curl --cacert /etc/kubernetes/ssl/ca.crt --cert /etc/kubernetes/ssl/apiserver-kubelet-client.crt --key /etc/kubernetes/ssl/apiserver-kubelet-client.key \
   https://192.168.110.111:6443/api/v1/pods

Reference sites
https://itnext.io/big-change-in-k8s-1-24-about-serviceaccounts-and-their-secrets-4b909a4af4e0
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/





root@k8s-master01:~# kubectl get apiservices.apiregistration.k8s.io
NAME                                   SERVICE                      AVAILABLE   AGE
v1.                                    Local                        True        32d
v1.admissionregistration.k8s.io        Local                        True        32d
v1.apiextensions.k8s.io                Local                        True        32d
v1.apps                                Local                        True        32d
v1.authentication.k8s.io               Local                        True        32d
v1.authorization.k8s.io                Local                        True        32d
v1.autoscaling                         Local                        True        32d
v1.batch                               Local                        True        32d
v1.certificates.k8s.io                 Local                        True        32d
v1.coordination.k8s.io                 Local                        True        32d
v1.crd.projectcalico.org               Local                        True        13d
v1.discovery.k8s.io                    Local                        True        32d
v1.events.k8s.io                       Local                        True        32d
v1.networking.k8s.io                   Local                        True        32d
v1.node.k8s.io                         Local                        True        32d
v1.policy                              Local                        True        32d
v1.rbac.authorization.k8s.io           Local                        True        32d
v1.scheduling.k8s.io                   Local                        True        32d
v1.storage.k8s.io                      Local                        True        32d
v1beta1.batch                          Local                        True        32d
v1beta1.discovery.k8s.io               Local                        True        32d
v1beta1.events.k8s.io                  Local                        True        32d
v1beta1.flowcontrol.apiserver.k8s.io   Local                        True        32d
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        14d
v1beta1.node.k8s.io                    Local                        True        32d
v1beta1.policy                         Local                        True        32d
v1beta1.storage.k8s.io                 Local                        True        32d
v1beta2.flowcontrol.apiserver.k8s.io   Local                        True        32d
v2.autoscaling                         Local                        True        32d
v2beta1.autoscaling                    Local                        True        32d
v2beta2.autoscaling                    Local                        True        32d

---

Configuring Ceph-CSI

 

1. Preparation

[root@sllee-ceph01 /]# ceph osd pool create kubernetes 64
pool 'kubernetes' created

[root@sllee-ceph01 /]# rbd pool init kubernetes

[root@sllee-ceph01 /]# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes'
[client.kubernetes]
        key = AQDBlORjNW6aDRAAEdITaRjL42YUdZ3uxmOjgg==

 

 

2. Check Ceph information

[root@sllee-ceph01 /]# ceph -s
  cluster:
    id:     2515bf16-1268-455a-b03b-69328a32fbc5   # note the cluster ID
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum sllee-ceph01,sllee-ceph02,sllee-ceph03 (age 27h)
    mgr: sllee-ceph02(active, since 27h), standbys: sllee-ceph01, sllee-ceph03
    mds: cephfs:1 {0=sllee-ceph01=up:active} 2 up:standby
    osd: 9 osds: 9 up (since 27h), 9 in (since 27h)

  data:
    pools:   10 pools, 321 pgs
    objects: 23 objects, 2.3 KiB
    usage:   9.1 GiB used, 351 GiB / 360 GiB avail
    pgs:     321 active+clean



[root@sllee-ceph01 /]# ceph fs status
cephfs - 0 clients
======
RANK  STATE       MDS          ACTIVITY     DNS    INOS
 0    active  sllee-ceph01  Reqs:    0 /s    10     13
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  1536k   110G
  cephfs_data      data       0    110G
STANDBY MDS
sllee-ceph03
sllee-ceph02
MDS version: ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)



[root@sllee-ceph01 /]# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
        key = AQCdWuRjWZt4JxAAnxNuJzkXnavYNKK7l6+KYw==     # note the key
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

 

 

3. CephFS

[root@sllee-master01 ceph-csi]# tee >  deploy/cephfs/kubernetes/csi-config-map.yaml << EOF
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "2515bf16-1268-455a-b03b-69328a32fbc5",
        "monitors": [
          "10.10.0.21:6789",
          "10.10.0.22:6789",
          "10.10.0.23:6789"
        ],
        "cephFS" : {
          "subvolumeGroup": "cephfs"
        }
      }
    ]
metadata:
  name: ceph-csi-config
EOF
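
The apply step is not shown; presumably the ConfigMap is created at this point, which would explain the AlreadyExists error when plugin-deploy.sh tries to create it again below:

kubectl apply -f deploy/cephfs/kubernetes/csi-config-map.yaml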




[root@sllee-master01 ceph-csi]# tee > examples/cephfs/secret.yaml << EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: default
stringData:
  # Required for statically provisioned volumes
  userID: admin
  userKey: AQCdWuRjWZt4JxAAnxNuJzkXnavYNKK7l6+KYw==
 
  # Required for dynamically provisioned volumes
  adminID: admin
  adminKey: AQCdWuRjWZt4JxAAnxNuJzkXnavYNKK7l6+KYw==
EOF




[root@sllee-master01 ceph-csi]# vi examples/cephfs/storageclass.yaml
...
  clusterID: 2515bf16-1268-455a-b03b-69328a32fbc5
...
  fsName: cephfs
...
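
The secret and storage class creation steps are not shown either; presumably, mirroring the RBD section below:

kubectl apply -f examples/cephfs/secret.yaml
kubectl create -f examples/cephfs/storageclass.yaml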


[root@sllee-master01 ceph-csi]# kubectl apply -f examples/ceph-conf.yaml


[root@sllee-master01 ceph-csi]# cd examples/cephfs/
[root@sllee-master01 cephfs]# ./plugin-deploy.sh
serviceaccount/cephfs-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
serviceaccount/cephfs-csi-nodeplugin created
Error from server (AlreadyExists): error when creating "./csi-config-map.yaml": configmaps "ceph-csi-config" already exists
service/csi-cephfsplugin-provisioner created
deployment.apps/csi-cephfsplugin-provisioner created
daemonset.apps/csi-cephfsplugin created
service/csi-metrics-cephfsplugin created



### Looking at the pods, the CSI provisioner pods for cephfs/rbd can be seen stuck in Pending.
### Replicas are already running on worker01 and worker02, so edit the deployment and reduce replicas from 3 to 2 (see the command after the listing below).

[root@sllee-master01 cephfs]# kubectl get pod -o wide
NAME                                           READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
csi-cephfsplugin-8ggv2                         3/3     Running   0          24m     10.20.0.45     sllee-worker02   <none>           <none>
csi-cephfsplugin-b6svc                         3/3     Running   0          24m     10.20.0.44     sllee-worker01   <none>           <none>
csi-cephfsplugin-provisioner-858dd6bb6-799p9   0/5     Pending   0          2m34s   <none>         <none>           <none>           <none>
csi-cephfsplugin-provisioner-858dd6bb6-97f9s   5/5     Running   0          24m     10.233.93.4    sllee-worker02   <none>           <none>
csi-cephfsplugin-provisioner-858dd6bb6-x9bdx   5/5     Running   0          24m     10.233.108.3   sllee-worker01   <none>           <none>
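
One way to reduce the replica count (a sketch; the deployment name matches the listing above):

kubectl scale deployment csi-cephfsplugin-provisioner --replicas=2

The PVC queried below is presumably created beforehand from the example manifest in the current directory:

kubectl apply -f pvc.yaml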



[root@sllee-master01 cephfs]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
csi-cephfs-pvc   Bound    pvc-71a28749-b06a-4247-bd00-68ba8c110d74   1Gi        RWX            csi-cephfs-sc   6s

[root@sllee-master01 cephfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS    REASON   AGE
pvc-71a28749-b06a-4247-bd00-68ba8c110d74   1Gi        RWX            Delete           Bound    default/csi-cephfs-pvc   csi-cephfs-sc            46s

 

 

 

4. RBD

[root@sllee-master01 ceph-csi]# tee > deploy/rbd/kubernetes/csi-config-map.yaml << EOF
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "2515bf16-1268-455a-b03b-69328a32fbc5",
        "monitors": [
          "10.10.0.21:6789",
          "10.10.0.22:6789",
          "10.10.0.23:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-rbd-config
 
EOF



[root@sllee-master01 ceph-csi]# tee > examples/rbd/secret.yaml << EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  # Key values correspond to a user name and its key, as defined in the
  # ceph cluster. User ID should have required access to the 'pool'
  # specified in the storage class
  userID: kubernetes
  userKey: AQDBlORjNW6aDRAAEdITaRjL42YUdZ3uxmOjgg==
  
  # Encryption passphrase
  encryptionPassphrase: test_passphrase
 
EOF


[root@sllee-master01 ceph-csi]# kubectl apply -f examples/rbd/secret.yaml
secret/csi-rbd-secret configured


[root@sllee-master01 ceph-csi]# vi examples/rbd/storageclass.yaml
   ...
   clusterID: 2515bf16-1268-455a-b03b-69328a32fbc5
   ...
   pool: kubernetes
   ...
   
[root@sllee-master01 ceph-csi]# kubectl create -f examples/rbd/storageclass.yaml
storageclass.storage.k8s.io/csi-rbd-sc created
   
[root@sllee-master01 ceph-csi]# kubectl apply -f examples/ceph-conf.yaml


[root@sllee-master01 ceph-csi]# cd examples/rbd/
[root@sllee-master01 rbd]# ./plugin-deploy.sh



[root@sllee-master01 rbd]# kubectl apply -f pvc.yaml
persistentvolumeclaim/rbd-pvc unchanged

[root@sllee-master01 rbd]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS    REASON   AGE
pvc-1fdf70ea-4fe3-4e34-938f-5b68ba151264   1Gi        RWO            Delete           Bound    default/rbd-pvc          csi-rbd-sc               14m
pvc-71a28749-b06a-4247-bd00-68ba8c110d74   1Gi        RWX            Delete           Bound    default/csi-cephfs-pvc   csi-cephfs-sc            55m

[root@sllee-master01 rbd]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
csi-cephfs-pvc   Bound    pvc-71a28749-b06a-4247-bd00-68ba8c110d74   1Gi        RWX            csi-cephfs-sc   55m
rbd-pvc          Bound    pvc-1fdf70ea-4fe3-4e34-938f-5b68ba151264   1Gi        RWO            csi-rbd-sc      14m

 

 

https://github.com/ceph/ceph-csi

---

Changing OS options

/etc/sysctl.conf


vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
net.ipv4.ip_forward=1
net.ipv4.ip_local_reserved_ports=30000-32767
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-ip6tables=1
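
The settings can be applied without a reboot (a sketch; the net.bridge.* keys require the br_netfilter module to be loaded first):

# modprobe br_netfilter
# sysctl -p /etc/sysctl.conf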
