$ sudo kubeadm init \
    --pod-network-cidr=10.10.0.0/16 \
    --apiserver-advertise-address=192.168.191.144 \
    --kubernetes-version=v1.23.3
[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local mac-master] and IPs [10.96.0.1 192.168.191.144]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost mac-master] and IPs [192.168.191.144 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost mac-master] and IPs [192.168.191.144 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.003370 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node mac-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node mac-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nawjbe.tn41eyo5t8gloqgw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.191.144:6443 --token nawjbe.tn41eyo5t8gloqgw \
    --discovery-token-ca-cert-hash sha256:bb3dd37a78ac4c06c43fb38a03a897903ba803c7d211dc465d61bebd9c716411
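The same initialization can also be expressed as a kubeadm configuration file instead of command-line flags, which is easier to keep under version control. A minimal sketch for the v1beta3 config API used by v1.23 (the file name kubeadm-config.yaml is arbitrary; the values simply mirror the flags above):

$ cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.191.144        # same as --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.3                 # same as --kubernetes-version
networking:
  podSubnet: 10.10.0.0/16                  # same as --pod-network-cidr

$ sudo kubeadm init --config kubeadm-config.yaml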
$ k version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/arm64"}
$ k get no -owide
NAME         STATUS     ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
mac-master   NotReady   control-plane,master   13m   v1.23.3   192.168.191.144   <none>        Ubuntu 22.04.2 LTS   5.15.0-75-generic   docker://20.10.24
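The node reports NotReady because no CNI network plugin is installed yet; the next step applies Flannel. A sketch of fetching the manifest and pointing its Network field at this cluster's pod CIDR (Flannel ships with 10.244.0.0/16 by default, while the cluster was initialized with 10.10.0.0/16; the URL is the upstream flannel-io location at the time and may have moved):

$ wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
$ # the pod network lives in net-conf.json inside the kube-flannel-cfg ConfigMap
$ sed -i 's#10.244.0.0/16#10.10.0.0/16#' kube-flannel.yml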
$ k apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
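Flannel runs as a DaemonSet, so one kube-flannel-ds pod should appear per node, and the node flips to Ready once its pod is Running; CoreDNS also leaves Pending at that point. A quick check (the label selector is the one kubeadm's CoreDNS deployment uses):

$ k get pods -n kube-flannel -o wide
$ k get pods -n kube-system -l k8s-app=kube-dns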
$ k get no
NAME         STATUS   ROLES                  AGE   VERSION
mac-master   Ready    control-plane,master   36m   v1.23.3
$ k describe no mac-master
Name:               mac-master
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=mac-master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"ee:61:5e:ff:69:8a"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.191.144
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 17 Jun 2022 08:56:06 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  mac-master
  AcquireTime:     <unset>
  RenewTime:       Sat, 17 Jun 2022 09:34:30 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 17 Jun 2022 09:31:50 +0000   Sat, 17 Jun 2022 09:31:50 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sat, 17 Jun 2022 09:32:11 +0000   Sat, 17 Jun 2022 08:56:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 17 Jun 2022 09:32:11 +0000   Sat, 17 Jun 2022 08:56:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 17 Jun 2022 09:32:11 +0000   Sat, 17 Jun 2022 08:56:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 17 Jun 2022 09:32:11 +0000   Sat, 17 Jun 2022 09:32:01 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.191.144
  Hostname:    mac-master
Capacity:
  cpu:                2
  ephemeral-storage:  10218772Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             4005812Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  9417620260
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             3903412Ki
  pods:               110
System Info:
  Machine ID:                 672d5357bef84a6cb30d0311906a8196
  System UUID:                f4844d56-2a2e-5f28-7680-c77f427dbe74
  Boot ID:                    cfde4e7a-e55c-4ee3-9c45-92511c5a78b0
  Kernel Version:             5.15.0-75-generic
  OS Image:                   Ubuntu 22.04.2 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://20.10.24
  Kubelet Version:            v1.23.3
  Kube-Proxy Version:         v1.23.3
PodCIDR:                      10.10.0.0/24
PodCIDRs:                     10.10.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace     Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                                 ------------  ----------  ---------------  -------------  ---
  kube-flannel  kube-flannel-ds-n785t                100m (5%)     0 (0%)      50Mi (1%)        0 (0%)         3m7s
  kube-system   coredns-64897985d-7mxbc              100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     38m
  kube-system   coredns-64897985d-8kk4g              100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     38m
  kube-system   etcd-mac-master                      100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         38m
  kube-system   kube-apiserver-mac-master            250m (12%)    0 (0%)      0 (0%)           0 (0%)         38m
  kube-system   kube-controller-manager-mac-master   200m (10%)    0 (0%)      0 (0%)           0 (0%)         38m
  kube-system   kube-proxy-j2qrp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
  kube-system   kube-scheduler-mac-master            100m (5%)     0 (0%)      0 (0%)           0 (0%)         38m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  0 (0%)
  memory             290Mi (7%)  340Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From        Message
  ----    ------                   ----               ----        -------
  Normal  Starting                 38m                kube-proxy
  Normal  Starting                 38m                kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  38m (x4 over 38m)  kubelet     Node mac-master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    38m (x3 over 38m)  kubelet     Node mac-master status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     38m (x3 over 38m)  kubelet     Node mac-master status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 38m                kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  38m                kubelet     Node mac-master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    38m                kubelet     Node mac-master status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     38m                kubelet     Node mac-master status is now: NodeHasSufficientPID
  Normal  NodeReady                2m39s              kubelet     Node mac-master status is now: NodeReady
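Note the Taints line above: the control plane carries node-role.kubernetes.io/master:NoSchedule, so ordinary workloads will not be scheduled on it. That is what you want once workers join; for a single-node test cluster the taint could be removed instead (a sketch; the trailing minus deletes the taint):

$ k taint nodes mac-master node-role.kubernetes.io/master:NoSchedule-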
Worker
The Worker connects to the Master: pull the required images, install the network plugin, and finally join the Worker to the cluster.
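The join command printed by kubeadm init embeds a bootstrap token that expires after 24 hours by default. If it has been lost or has expired, a fresh join command can be generated on the control plane (a sketch; run with sudo):

$ sudo kubeadm token create --print-join-command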
$ sudo kubeadm join 192.168.191.144:6443 --token nawjbe.tn41eyo5t8gloqgw \
    --discovery-token-ca-cert-hash sha256:bb3dd37a78ac4c06c43fb38a03a897903ba803c7d211dc465d61bebd9c716411
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0617 09:38:59.996636    6375 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
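Back on the control plane, the new node should show up within a minute or so. Its ROLES column reads <none> by default because the roles shown there are derived from node-role.kubernetes.io/* labels; a worker label is purely cosmetic and can be added by hand if desired (the worker hostname below is a placeholder):

$ k get no
$ k label node <worker-hostname> node-role.kubernetes.io/worker=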