TASK [network_plugin/calico : Check if inventory match current cluster configuration] ****************************************************************
fatal: [k8s-master01]: FAILED! => {
"assertion": "not calico_pool_conf.spec.ipipMode is defined or calico_pool_conf.spec.ipipMode == calico_ipip_mode",
"changed": false,
"evaluated_to": false,
"msg": "Your inventory doesn't match the current cluster configuration"
}
Cause
- The default Calico mode changed from IPIP to VXLAN.
Solution
(1) To keep using IP-in-IP mode, set the following inventory variables:
calico_ipip_mode: 'Always' # Possible values are `Always`, `CrossSubnet`, `Never`
calico_vxlan_mode: 'Never'
calico_network_backend: 'bird'
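The failing assertion compares the inventory against the spec of the cluster's existing IPPool. Once the settings above are applied, that pool should look roughly like this (an illustrative fragment; `default-pool` is kubespray's default pool name, and field names follow the Calico IPPool CRD):

```yaml
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-pool        # kubespray's default calico_pool_name
spec:
  ipipMode: Always          # must match calico_ipip_mode in the inventory
  vxlanMode: Never          # must match calico_vxlan_mode in the inventory
```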
When you request a load balancer from a Kubernetes cluster on a cloud provider, the cloud platform assigns an IP address. MetalLB cannot create IP addresses out of thin air, so you must provide a pool of IP addresses it is allowed to use.
External Advertisement
After MetalLB assigns an external IP address to a service, it must make the network outside the cluster aware that the IP "lives" in the cluster. Depending on the mode in use (ARP, NDP, or BGP), MetalLB exposes the IP to the external network using standard networking or routing protocols.
Layer 2 Mode (ARP/NDP)
In Layer 2 (data link) mode, one machine in the cluster takes ownership of the service and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make the IP reachable on the local network.
BGP
In BGP mode, every node in the cluster establishes a peering session with a nearby BGP-capable router that you control, and tells that router how to forward traffic to the service IPs.
Using BGP enables load balancing across multiple nodes and fine-grained traffic control thanks to BGP's policy mechanisms.
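In MetalLB's CRD-based configuration, a BGP setup is declared roughly like this (a hypothetical sketch; the ASNs and peer address are placeholders, not values from this cluster):

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: sample-router       # hypothetical peer name
  namespace: metallb-system
spec:
  myASN: 64500              # ASN used by the cluster nodes (placeholder)
  peerASN: 64501            # ASN of the upstream router (placeholder)
  peerAddress: 10.0.0.1     # address of the upstream router (placeholder)
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: sample-advertisement
  namespace: metallb-system
```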
Installation and Usage
Prerequisites
If kube-proxy is running in IPVS mode, enable strict ARP mode.
root@k8s-master01:~# kubectl -n kube-system edit cm kube-proxy
Change the setting to strictARP: true
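Instead of editing the ConfigMap interactively, the same change can be applied non-interactively (a sketch following the pattern in the MetalLB documentation):

```shell
# Flip strictARP from false to true in the kube-proxy ConfigMap and re-apply it
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system
```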
Installation
root@k8s-master01:~# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
Verify the Installation
root@k8s-master01:~# kubectl -n metallb-system get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/controller-6c58495cbb-2l6dd       1/1     Running   0          13m
pod/speaker-gr4l9                     1/1     Running   0          13m
pod/speaker-mfqkn                     1/1     Running   0          13m
pod/speaker-nlddv                     1/1     Running   0          13m
pod/speaker-p69vf                     1/1     Running   0          13m
pod/speaker-sdk67                     1/1     Running   0          13m
pod/speaker-zwnht                     1/1     Running   0          13m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.233.13.253   <none>        443/TCP   13m

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   6         6         6       6            6           kubernetes.io/os=linux   13m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           13m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-6c58495cbb   1         1         1       13m
root@k8s-master01:~# kubectl get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE
nginx-service   LoadBalancer   10.233.24.150   192.168.110.118   8088:30950/TCP   13m
root@k8s-master01:~# curl 192.168.110.118:8088
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
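The nginx-service tested above could have been created with a manifest along these lines (hypothetical; only the service type and port mapping are taken from the output above, and the selector label is assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer   # MetalLB assigns the external IP
  selector:
    app: nginx         # assumed label on the nginx pods
  ports:
  - port: 8088         # matches the 8088:30950/TCP mapping above
    targetPort: 80
```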
Notes
In earlier versions, the IP address range was configured via a ConfigMap; it is now configured through the IPAddressPool Custom Resource (CR).
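For example, a pool that could have handed out 192.168.110.118 above would be declared with resources like these (a sketch; the pool name is hypothetical and the address range is inferred from the assigned IP):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool        # hypothetical pool name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.110.110-192.168.110.120   # assumed range containing 192.168.110.118
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement       # announce the pool in Layer 2 mode
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```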
[Critical] Master Node
- Master Kubelet Ready: sum(up{job="kubelet",metrics_path="/metrics/probes",node=~"master0.p"}) when min() of query(A,5m,now) is below 3
- MasterNode Ready: sum(kube_node_status_condition{condition="Ready",node=~"master01p|master02p|master03p",status="true"}) when min() of query(A,5m,now) is below 3
- MasterNode Unreachable: sum(kube_node_status_condition{job="kube-state-metrics",status="true",condition="NetworkUnavailable",node=~"lgestgbee0.p"}) when max() of query(A,5m,now) is above 0
[Critical] Kubernetes System Pod
- kube-scheduler: sum(kube_pod_status_ready{condition="true",pod=~"kube-scheduler-master0.*"}) when min() of query(A,5m,now) is below 3
- kube-apiserver: sum(kube_pod_status_ready{namespace="kube-system",pod=~"kube-apiserver-master0.*",condition="true"}) when min() of query(A,5m,now) is below 3
- kube-controller-manager: sum(kube_pod_status_ready{namespace="kube-system",pod=~"kube-controller-manager-master0.*",condition="true"}) when min() of query(A,5m,now) is below 3
[Critical] etcd
- ETCD Ready: sum(kube_pod_status_ready{namespace="kube-system",pod=~"etcd-master0.*",condition="true"}) when min() of query(A,5m,now) is below 3
- ETCD No Leader: etcd_server_has_leader{job="kube-etcd"} when min() of query(A,5m,now) is below 1
[Critical] Services
- PersistentVolume Error Count: sum(kube_persistentvolume_status_phase{phase=~"Pending|Failed"}) when max() of query(A,5m,now) is above 0
- Trident CSI Ready Count: sum(kube_pod_status_ready{namespace="trident",pod=~"trident-csi.*",condition="true"}) when min() of query(A,5m,now) is below 43 (* node count)
- Ingress Controller Ready: kube_deployment_status_replicas_ready{deployment="ingress-nginx-controller"} when min() of query(A,5m,now) is below 1
[Warning] Kubernetes
- WorkerNode Ready: sum(kube_node_status_condition{node=~"worker0.*|bworker.*",status="true"}) when min() of query(A,5m,now) is below 43 (* node count)
- kube-proxy Ready: avg(kube_pod_status_ready{namespace="kube-system",pod=~"kube-proxy-.*",condition="true"}) when min() of query(A,5m,now) is below 1
- Kube Memory Pressure: kube_node_status_condition{condition="MemoryPressure",status="true"} when avg() of query(A,5m,now) is above 0
- Kube CPU Overcommit: kube_node_status_allocatable{resource="cpu"} when max() of query(A,5m,now) is above 64
- KubeClientCertificateExpiration: kubelet_certificate_manager_client_expiration_renew_errors{job="kubelet",metrics_path="/metrics"} when avg() of query(A,5m,now) is above 0
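The conditions above map naturally onto Prometheus alerting rules. For instance, the MasterNode Ready check could be expressed as follows (a sketch, not an export of the actual Grafana rule; the group name is arbitrary):

```yaml
groups:
- name: kubernetes-critical
  rules:
  - alert: MasterNodeReady
    # Fires when fewer than 3 masters have reported Ready for 5 minutes
    expr: sum(kube_node_status_condition{condition="Ready",node=~"master01p|master02p|master03p",status="true"}) < 3
    for: 5m
    labels:
      severity: critical
```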
root@k8s-master01:~/rook/deploy/charts/rook-ceph# helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml
NAME: rook-ceph
LAST DEPLOYED: Mon Oct 17 23:01:35 2022
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Rook Operator has been installed. Check its status by running:
kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"
Visit https://rook.io/docs/rook/latest for instructions on how to create and configure Rook clusters
Important Notes:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
- Each CephCluster must be deployed to its own namespace, the samples use `rook-ceph` for the namespace.
- The sample manifests assume you also installed the rook-ceph operator in the `rook-ceph` namespace.
- The helm chart includes all the RBAC required to create a CephCluster CRD in the same namespace.
- Any disk devices you add to the cluster in the 'CephCluster' must be empty (no filesystem and no partitions).
root@k8s-master01:~/rook/deploy/charts/rook-ceph-cluster# helm install --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
NAME: rook-ceph-cluster
LAST DEPLOYED: Mon Oct 17 23:12:50 2022
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
kubectl --namespace rook-ceph get cephcluster
Visit https://rook.io/docs/rook/latest/CRDs/ceph-cluster-crd/ for more information about the Ceph CRD.
Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`
root@k8s-master01:~/rook/deploy/examples/csi/rbd# kubectl apply -f storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
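With the StorageClass in place, a workload can request a Ceph RBD volume through a PVC like this (a minimal sketch; the claim name and size are arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-rbd-pvc       # hypothetical name
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce          # RBD block volumes are single-node by default
  resources:
    requests:
      storage: 1Gi
```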
root@k8s-master01:~/rook/deploy/examples# kubectl apply -f wordpress.yaml
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created
root@k8s-master01:~/rook/deploy/examples# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-7b989dbf57-2jp9z         1/1     Running   0          6m25s
wordpress-mysql-6965fc8cc8-bhxk7   1/1     Running   0          8m25s