Kubernetes 1.20 -> 1.23 (in production, upgrading directly across a large version span is not recommended)
Upgrade order:
etcd -> master components (apiserver, controller-manager, scheduler) -> kubelet (involves migrating the pods off each node)
Project address:
https://github.com/kubernetes/kubernetes/tree/release-1.23
Generally, review the changes in the latest release of the target version line: read the most urgent items (the "Urgent Upgrade Notes") first, then the rest of the changes.
Check the changelog for what changed in each release: https://github.com/kubernetes/kubernetes/blob/release-1.23/CHANGELOG/CHANGELOG-1.23.md#v1230
etcd upgrade:
Under "Changelog since v1.22.0" you can find:
- Upgrade etcd to 3.5.1 (#105706, @uthark) [SIG Cloud Provider, Cluster Lifecycle and Testing]
etcd upgrade steps:
- Back up the etcd data
- Download the new etcd release
- Stop etcd
- Replace the etcd and etcdctl binaries
- Start etcd
Back up the etcd data:
export ETCDCTL_API=3
etcdctl --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 member list
etcdctl --cacert="/etc/kubernetes/pki/etcd/etcd-ca.pem" --cert="/etc/kubernetes/pki/etcd/etcd.pem" --key="/etc/kubernetes/pki/etcd/etcd-key.pem" --endpoints=https://192.168.1.10:2379 snapshot save /tmp/etcd-snapshot.db
etcdctl snapshot status /tmp/etcd-snapshot.db -w table
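If the upgrade goes wrong, the snapshot can be restored with etcdctl. A minimal single-member sketch; the --data-dir path and the peer URL below are assumptions and must match the flags in your etcd.service:
# Hypothetical restore example; adjust --name and the --initial-* flags to your cluster
etcdctl snapshot restore /tmp/etcd-snapshot.db \
  --name=k8s-master01 \
  --data-dir=/var/lib/etcd-restored \
  --initial-cluster=k8s-master01=https://192.168.1.10:2380 \
  --initial-advertise-peer-urls=https://192.168.1.10:2380
# Then point etcd's --data-dir at the restored directory and restart the service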
Download the new etcd release
Check the current etcd version: etcdctl version
New etcd releases: https://github.com/etcd-io/etcd/releases (downloading from this address may require a proxy in some networks)
ETCD_VER=v3.5.1
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
ls /tmp/etcd-v3.5.1-linux-amd64.tar.gz
cd /tmp
tar -xvf etcd-v3.5.1-linux-amd64.tar.gz
cd etcd-v3.5.1-linux-amd64
ls
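Before replacing anything, it is worth confirming the unpacked binaries are really the expected version:
./etcd --version
./etcdctl version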
On all three masters:
systemctl stop etcd.service
The stopped members will show "context deadline exceeded" in the health check:
etcdctl --cacert="/etc/kubernetes/pki/etcd/etcd-ca.pem" --cert="/etc/kubernetes/pki/etcd/etcd.pem" --key="/etc/kubernetes/pki/etcd/etcd-key.pem" --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 endpoint health
On master01, back up the old binaries and push the new ones to the other masters:
cd /usr/local/bin/ && mkdir bak && cp etcd* bak/
scp /tmp/etcd-v3.5.1-linux-amd64/etcd* k8s-master02:/usr/local/bin/
scp /tmp/etcd-v3.5.1-linux-amd64/etcd* k8s-master03:/usr/local/bin/
rm -rf etcd*
cp /tmp/etcd-v3.5.1-linux-amd64/etcd* .
etcdctl version
On all three masters:
systemctl start etcd
systemctl status etcd
etcdctl --cacert="/etc/kubernetes/pki/etcd/etcd-ca.pem" --cert="/etc/kubernetes/pki/etcd/etcd.pem" --key="/etc/kubernetes/pki/etcd/etcd-key.pem" --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 endpoint health
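To confirm every member is healthy and actually running 3.5.1, endpoint status prints the version per endpoint (same certificates and endpoints as above):
etcdctl --cacert="/etc/kubernetes/pki/etcd/etcd-ca.pem" --cert="/etc/kubernetes/pki/etcd/etcd.pem" --key="/etc/kubernetes/pki/etcd/etcd-key.pem" --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 endpoint status -w table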
Upgrade the Kubernetes master nodes
https://github.com/kubernetes/kubernetes/blob/release-1.23/CHANGELOG/CHANGELOG-1.23.md
# Download kubernetes-server-linux-amd64.tar.gz for v1.23 (linked from the changelog page above), then:
tar xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin && ls
Upgrade kube-apiserver
systemctl stop kube-apiserver.service
mv /usr/local/bin/kube-apiserver /usr/local/bin/kube-apiserver.bak
cp -rp ./kube-apiserver /usr/local/bin/kube-apiserver
systemctl daemon-reload
systemctl restart kube-apiserver.service
/usr/local/bin/kube-apiserver --version
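Besides the version string, the health endpoints can confirm the new apiserver is actually serving requests:
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez'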
To check the logs if needed:
tail -f /var/log/messages
Upgrade kube-controller-manager and kube-scheduler
systemctl stop kube-controller-manager kube-scheduler
mv /usr/local/bin/kube-controller-manager /usr/local/bin/kube-controller-manager.bak && mv /usr/local/bin/kube-scheduler /usr/local/bin/kube-scheduler.bak
cp -rp ./kube-controller-manager /usr/local/bin/kube-controller-manager && cp -rp ./kube-scheduler /usr/local/bin/kube-scheduler
systemctl daemon-reload
systemctl restart kube-controller-manager kube-scheduler
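A quick sanity check is that both components re-acquired leadership; assuming the default lease-based leader election, their Leases live in kube-system:
kubectl -n kube-system get lease kube-controller-manager kube-scheduler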
Upgrade kube-proxy
systemctl stop kube-proxy
mv /usr/local/bin/kube-proxy /usr/local/bin/kube-proxy.bak
cp -rp ./kube-proxy /usr/local/bin/kube-proxy
systemctl daemon-reload
systemctl restart kube-proxy
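As a smoke test, kube-proxy reports its active mode over the metrics endpoint (this assumes the default --metrics-bind-address of 127.0.0.1:10249):
systemctl status kube-proxy
curl -s http://127.0.0.1:10249/proxyMode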
Upgrade kubectl on master01
mv /usr/local/bin/kubectl /usr/local/bin/kubectl.bak
cp -p ./kubectl /usr/local/bin/kubectl
kubectl version
Update master02 and master03
# From master01, copy the new binaries over:
scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet k8s-master02:/tmp
scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet k8s-master03:/tmp
# Log in to master02 and master03:
systemctl stop kube-apiserver.service kube-controller-manager.service kube-scheduler.service kube-proxy.service
cp -p /tmp/kube-* /usr/local/bin/
systemctl daemon-reload
systemctl restart kube-apiserver.service kube-controller-manager.service kube-scheduler.service kube-proxy.service
Upgrade the Kubernetes nodes and Calico
Check which nodes pods are running on (nodes hosting more pods can be upgraded last):
kubectl get po --all-namespaces -owide
kubectl cordon k8s-master01
kubectl drain k8s-master01 --ignore-daemonsets --delete-emptydir-data
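After the drain, only DaemonSet pods should remain on the node, which can be verified with a field selector:
kubectl get po -A -owide --field-selector spec.nodeName=k8s-master01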
Upgrade kubelet
systemctl stop kubelet
mv /usr/local/bin/kubelet /usr/local/bin/kubelet.bak
cp -p ./kubelet /usr/local/bin/kubelet
Calico install/upgrade guide: https://docs.tigera.io/calico/latest/getting-started/kubernetes/
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico-etcd.yaml -o calico.yaml
Check the Calico version currently in use (the image tag in the DaemonSet):
kubectl -n kube-system edit daemonsets.apps calico-node
Compare it against the version in the downloaded manifest, then modify the update strategy before applying, as sketched below:
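One assumed approach (the original does not show which strategy was chosen) is to set the calico-node DaemonSet to OnDelete in calico.yaml, so the new image is only picked up when a pod is deleted:
# In the calico-node DaemonSet spec of calico.yaml:
updateStrategy:
  type: OnDelete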
kubectl apply -f calico.yaml
After applying, delete all calico-node pods so they are recreated with the new image:
kubectl -n kube-system delete pod calico-node-lnqbf calico-node-wqdfw calico-node-jvbpq calico-node-s5m29 calico-node-xsz97
After the rollout, check the updated image:
kubectl -n kube-system describe ds calico-node | grep Image:
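Recreation progress can also be watched directly (k8s-app=calico-node is the label the upstream manifest uses):
kubectl -n kube-system get ds calico-node
kubectl -n kube-system get po -l k8s-app=calico-node -owide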
Bring the master01 node back:
systemctl start kubelet.service
kubectl uncordon k8s-master01
kubectl get node
Likewise, upgrade kubelet on the remaining nodes; taking a shortcut here and updating in place without draining the pods first.
# Log in to master02, master03, node01, and node02:
cd /usr/local/bin/
mv kubelet kubelet.bak
# From master01, copy the new kubelet over:
scp ./kubelet k8s-master02:/usr/local/bin/kubelet
scp ./kubelet k8s-master03:/usr/local/bin/kubelet
scp ./kubelet k8s-node01:/usr/local/bin/kubelet
scp ./kubelet k8s-node02:/usr/local/bin/kubelet
# Restart the kubelet service on master02, master03, node01, and node02:
systemctl restart kubelet
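The VERSION column of kubectl get node reflects each node's kubelet, so one command from master01 confirms the whole cluster is on the new release:
kubectl get node -owide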
Upgrade complete
Upgrade CoreDNS
Official repo: https://github.com/coredns/coredns.git
Check the current version: kubectl -n kube-system describe deployments.apps coredns | grep Image
Check the upstream changelog for the recommended version
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes/
# Back up the existing CoreDNS configuration first:
mkdir bak
kubectl -n kube-system get cm coredns -o yaml > bak/coredns-cm.yaml
kubectl -n kube-system get deployments.apps coredns -o yaml > bak/coredns-deploy.yaml
kubectl get clusterrole system:coredns -o yaml > bak/cr.yaml
kubectl get clusterrolebindings system:coredns -o yaml > bak/crb.yaml
# Generate the new manifest with the deploy script and apply it:
./deploy.sh -s | kubectl apply -f -
The Deployment generated by the script may carry a label selector that does not match the existing Deployment (selectors are immutable, so the apply fails); in that case delete the old Deployment first (kubectl delete -f bak/coredns-deploy.yaml), then re-run the script.
Check the updated version:
kubectl -n kube-system get pod coredns-6fb76d9459-dprn7 -o yaml | grep image:
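As a final check, wait for the rollout and resolve a service from inside the cluster (the busybox image/tag here is an assumption; any image with a working nslookup will do):
kubectl -n kube-system rollout status deployment/coredns
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default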
Add the pod-style resolv.conf settings to the host so the host itself can resolve cluster services:
vim /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Verify that resolution works:
kubectl -n kube-system get svc kube-dns
nslookup kube-dns.kube-system