Deploying Kubernetes with kubeadm

Here I use kubeadm to deploy 1.13.3. Just as I was writing this post, 1.13.4 was released, so I took the opportunity to upgrade the cluster as well.

Set up SSH mutual trust

# vim /etc/hosts
192.168.100.128 master
192.168.100.129 node01
192.168.100.130 node02
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i .ssh/id_rsa.pub root@node01
# ssh-copy-id -i .ssh/id_rsa.pub root@node02

Install Ansible

# yum -y install ansible
# cat /etc/ansible/hosts | grep -v ^# | grep -v ^$
[node]
node01
node02
# ansible node -m copy -a 'src=/etc/hosts dest=/etc/'
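
As an optional check (my own addition, not in the original steps), make sure Ansible can actually reach both nodes over the SSH keys set up earlier; each host should answer with "pong":

# ansible node -m ping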

Disable SELinux and the firewall

# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# systemctl disable firewalld && systemctl stop firewalld

# ansible node -m copy -a 'src=/etc/selinux/config dest=/etc/selinux/'
# ansible node -a 'systemctl stop firewalld'
# ansible node -a 'systemctl disable firewalld'
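
The sed edit above only takes effect after a reboot. To also switch SELinux to permissive mode right away (an optional extra step I am adding here), run:

# setenforce 0
# ansible node -a 'setenforce 0'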

Install Docker

# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum makecache fast
# yum list docker-ce --showduplicates | sort -r
# yum install -y docker-ce-18.06.1.ce-3.el7
# systemctl enable docker && systemctl start docker

# ansible node -m yum -a "state=present name=yum-utils"
# ansible node -m copy -a 'src=/etc/yum.repos.d/docker-ce.repo dest=/etc/yum.repos.d/'
# ansible node -m yum -a "state=present name=docker-ce-18.06.1.ce-3.el7"
# ansible node -a 'systemctl start docker'
# ansible node -a 'systemctl enable docker'
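
As a quick sanity check (my own addition), confirm that the Docker daemon is up and reports the expected version on every machine:

# docker version
# ansible node -a 'docker version'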

Unpack the Kubernetes server tarball

# tar -zxvf kubernetes-server-linux-amd64.tar.gz 
# cd kubernetes/server/bin/
# docker load -i kube-apiserver.tar
# docker load -i kube-controller-manager.tar
# docker load -i kube-scheduler.tar
# docker load -i kube-proxy.tar

# ansible node -m copy -a 'src=kube-proxy.tar dest=/root'
# ansible node -m command -a "docker load -i kube-proxy.tar"

Configure the Kubernetes yum repository

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum install -y kubelet kubeadm kubectl
# systemctl enable kubelet && systemctl start kubelet

# ansible node -m copy -a 'src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/'
# ansible node -m yum -a "state=present name=kubelet"
# ansible node -m yum -a "state=present name=kubeadm"
# ansible node -m yum -a "state=present name=kubectl"
# ansible node -a 'systemctl start kubelet'
# ansible node -a 'systemctl enable kubelet'
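
Note that yum installs the newest packages available in the repository, which may already be ahead of the v1.13.3 images loaded above. If you want the packages pinned to the same version (an optional variant of the install commands), specify it explicitly:

# yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3
# ansible node -m yum -a "state=present name=kubelet-1.13.3,kubeadm-1.13.3,kubectl-1.13.3"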

Configure kernel parameters for kube-proxy

# grep -v  ^# /etc/sysctl.conf 
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# ansible node -m copy -a 'src=/etc/sysctl.conf dest=/etc/'
# ansible node -a 'sysctl -p'
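
If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; load it first (assuming a stock CentOS 7 kernel) and re-run sysctl:

# modprobe br_netfilter
# ansible node -a 'modprobe br_netfilter'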

List the required images

# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.3
k8s.gcr.io/kube-controller-manager:v1.13.3
k8s.gcr.io/kube-scheduler:v1.13.3
k8s.gcr.io/kube-proxy:v1.13.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Pull the remaining images

After pulling, retag the images yourself to the k8s.gcr.io names that kubeadm expects.

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24
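
A retagging sketch, mapping the pulled images to the names reported by kubeadm config images list:

# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24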

Allow kubelet to run with swap enabled

# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false
# ansible node -m copy -a 'src=/etc/sysconfig/kubelet dest=/etc/sysconfig/'
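
For reference, the generally recommended alternative is to turn swap off entirely instead of tolerating it; a sketch of that variant (not used in this walkthrough):

# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab    # keep swap disabled across reboots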

Initialize the cluster

# kubeadm init \
--kubernetes-version=v1.13.3 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.100.128 \
--ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.100.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.100.128 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 47.012976 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mrtv9n.fdvmt32f3kkbyyjx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.100.128:6443 --token mrtv9n.fdvmt32f3kkbyyjx --discovery-token-ca-cert-hash sha256:0b5fefef7ca78df72d8d35d3b0e05511d24be0365b0b403f55c8438167606654
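
For reference, the same init options can also be written as a kubeadm configuration file and passed with kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap. A minimal sketch for the v1beta1 config API that ships with kubeadm 1.13 (the file name kubeadm-config.yaml is my own choice; this is an equivalent form, not what was actually run above):

apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.128   # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.3            # --kubernetes-version
networking:
  podSubnet: 10.244.0.0/16            # --pod-network-cidr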

Configure access to the cluster

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
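
Optionally (my own addition), enable kubectl command completion to make the remaining steps more convenient:

# yum install -y bash-completion
# echo 'source <(kubectl completion bash)' >> ~/.bashrc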

Install flannel (with the extra configuration trimmed)

# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-cm8gv         1/1     Running   0          41m
coredns-86c58d9df4-xpccv         1/1     Running   0          41m
etcd-master                      1/1     Running   0          40m
kube-apiserver-master            1/1     Running   0          40m
kube-controller-manager-master   1/1     Running   0          41m
kube-flannel-ds-amd64-xz6bf      1/1     Running   0          32s
kube-proxy-29pzf                 1/1     Running   0          41m
kube-scheduler-master            1/1     Running   0          41m

Join the worker nodes

# vim node.sh
#!/bin/bash
kubeadm join 192.168.100.128:6443 --token mrtv9n.fdvmt32f3kkbyyjx --discovery-token-ca-cert-hash sha256:0b5fefef7ca78df72d8d35d3b0e05511d24be0365b0b403f55c8438167606654 --ignore-preflight-errors=Swap
# ansible node -m copy -a 'src=/root/node.sh dest=/root/ mode=755'
# ansible node -m shell -a '/root/node.sh'
# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   68m   v1.13.3
node01   Ready    <none>   64s   v1.13.3
node02   Ready    <none>   64s   v1.13.3
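
The bootstrap token embedded in node.sh expires after 24 hours by default. If you need to join more nodes later, you can print a fresh join command on the master (my own addition, not needed for the steps above):

# kubeadm token create --print-join-command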

kubeadm upgrade plan

Check which versions are available to upgrade to and verify whether the current cluster is upgradable. To skip the internet check, pass in the optional [version] argument.

# kubeadm upgrade plan 1.13.4
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.13.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.13.3   1.13.4

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.3   1.13.4
Controller Manager   v1.13.3   1.13.4
Scheduler            v1.13.3   1.13.4
Kube Proxy           v1.13.3   1.13.4
CoreDNS              1.2.6     1.2.6
Etcd                 3.2.24    3.2.24

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply 1.13.4

Note: Before you can perform this upgrade, you have to update kubeadm to 1.13.4.

_____________________________________________________________________

Unpack the new Kubernetes release on the master node

# tar -zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kubeadm /usr/bin/
# docker load -i kube-apiserver.tar
# docker load -i kube-controller-manager.tar
# docker load -i kube-scheduler.tar
# docker load -i kube-proxy.tar

Upgrade the Kubernetes cluster to the specified version

# kubeadm upgrade apply 1.13.4
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.13.4"
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.13.4
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.13.4"...
Static pod: kube-apiserver-master hash: b9152d72f9c05c3d3f7b4ac7268324c6
Static pod: kube-controller-manager-master hash: 8288866dd95d24b3f0eb40747d951fba
Static pod: kube-scheduler-master hash: b734fcc86501dde5579ce80285c0bf0c
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests466472581"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-02-14-27-45/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: b9152d72f9c05c3d3f7b4ac7268324c6
Static pod: kube-apiserver-master hash: 2c4b7dbda2d0962b4cc2b6c98516bf14
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-02-14-27-45/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 8288866dd95d24b3f0eb40747d951fba
Static pod: kube-controller-manager-master hash: b6ca67226d47ac720e105375a9846904
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-02-14-27-45/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: b734fcc86501dde5579ce80285c0bf0c
Static pod: kube-scheduler-master hash: 4b52d75cab61380f07c0c5a69fb371d4
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Upgrade the nodes

Upgrade the node configuration

# kubeadm upgrade node config --kubelet-version v1.13.4
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Mark the node as unschedulable

# kubectl drain master --ignore-daemonsets
node/master cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-xz6bf, kube-proxy-dlck4
node/master drained
# kubectl get nodes |grep master
master Ready,SchedulingDisabled master 16d v1.13.3

Update the relevant binaries and uncordon the node

# cd kubernetes/server/bin/
# systemctl stop kubelet
# cp kubeadm kubectl kubelet /usr/bin/
# systemctl start kubelet
# kubectl uncordon master
node/master uncordoned
# kubectl get nodes|grep master
master Ready master 16d v1.13.4

Update the remaining nodes

# kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-8q4kq, kube-proxy-jvz4z
pod/coredns-86c58d9df4-97fw8 evicted
pod/rbd-provisioner-6447467945-jjz7j evicted
node/node01 evicted
# ansible node01 -a "systemctl stop kubelet"
# ansible node01 -m copy -a 'src=kubeadm dest=/usr/bin/'
# ansible node01 -m copy -a 'src=kubectl dest=/usr/bin/'
# ansible node01 -m copy -a 'src=kubelet dest=/usr/bin/'
# ansible node01 -a "systemctl start kubelet"
# kubectl uncordon node01
node/node01 uncordoned
# kubectl get nodes|grep node01
node01 Ready <none> 16d v1.13.4

# kubectl drain node02 --ignore-daemonsets
node/node02 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-8zw8z, kube-proxy-2hlp7
pod/rbd-provisioner-6447467945-p2dgr evicted
pod/coredns-86c58d9df4-6fp9p evicted
node/node02 evicted
# ansible node02 -a "systemctl stop kubelet"
# ansible node02 -m copy -a 'src=kubeadm dest=/usr/bin/'
# ansible node02 -m copy -a 'src=kubectl dest=/usr/bin/'
# ansible node02 -m copy -a 'src=kubelet dest=/usr/bin/'
# ansible node02 -a "systemctl start kubelet"
# kubectl uncordon node02
node/node02 uncordoned
# kubectl get nodes|grep node02
node02 Ready <none> 16d v1.13.4
# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   16d   v1.13.4
node01   Ready    <none>   16d   v1.13.4
node02   Ready    <none>   16d   v1.13.4
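
As a final check (my own addition), confirm that both the kubectl client and the API server now report the new version:

# kubectl version --short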