Deploying a Kubernetes 1.14.3 Cluster with Kubeadm

Published 2020-08-03 by 羊皮裘老头

I. Environment

  Hostname    IP address       Role        OS
  node11      192.168.11.11    k8s-master  CentOS 7.6
  node12      192.168.11.12    k8s-node    CentOS 7.6
  node13      192.168.11.13    k8s-node    CentOS 7.6

Note: the official recommendation is at least 2 CPU cores and 2 GB of RAM per machine.
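The original does not set up name resolution between the machines, but kubeadm registers each node under its hostname, so the names should resolve consistently. A minimal sketch, assuming the hostnames and addresses from the table above: append the mappings to /etc/hosts on every machine.

```shell
# Map the cluster hostnames to their IPs (values from the environment table)
cat <<'EOF' | tee -a /etc/hosts
192.168.11.11 node11
192.168.11.12 node12
192.168.11.13 node13
EOF
```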

II. Prepare the hosts (run the following commands on all three machines)

1. Configure the Aliyun yum repository (optional):

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

2. Install dependency packages:

yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

3. Disable the firewall:

systemctl stop firewalld && systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

4. Disable SELinux:

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

5. Disable swap:

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

6. Set kernel parameters (the bridge sysctls require the br_netfilter module, and kubeadm's preflight checks also expect IP forwarding to be enabled):

modprobe br_netfilter

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

7. Install Docker

1. Remove any old versions first:

yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine

2. Install prerequisite packages:

yum install -y yum-utils device-mapper-persistent-data lvm2

3. Configure the installation repository (Aliyun):

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

4. Enable the edge and test repositories (optional):

yum-config-manager --enable docker-ce-edge
yum-config-manager --enable docker-ce-test

5. Install:

yum makecache fast
yum install -y docker-ce

6. Start Docker:

systemctl start docker

Enable it at boot:

systemctl enable docker

It is recommended to configure an Aliyun registry mirror for Docker.

After installation, adjust the service unit; otherwise Docker sets the default policy of the iptables FORWARD chain to DROP. Kubeadm also recommends systemd as the cgroup driver, so daemon.json must be modified as well:

sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl daemon-reload
systemctl restart docker

8. Install kubeadm, kubelet, and kubectl

Configure the repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast

Install, pinning the versions to match the v1.14.3 cluster being deployed (an unpinned install would pull the latest release):

yum install -y kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3
systemctl enable kubelet

9. Pull the required images

Pull the images from an Aliyun mirror first; otherwise kubeadm tries to pull them from Google's registry, which fails from networks that cannot reach it.

Pull the images and retag them with the names kubeadm expects. The tag list below follows `kubeadm config images list` for v1.14.3:

images=(kube-apiserver:v1.14.3 kube-controller-manager:v1.14.3 kube-scheduler:v1.14.3 kube-proxy:v1.14.3 pause:3.1 etcd:3.3.10 coredns:1.3.1)
for imageName in ${images[@]} ; do
 docker pull registry.aliyuncs.com/google_containers/$imageName
 docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
 docker rmi registry.aliyuncs.com/google_containers/$imageName
done

III. Initialize the cluster

Unless otherwise noted, run the following commands on node11.

1. Initialize the cluster with kubeadm init (change the advertise address to your master's IP):

kubeadm init \
  --kubernetes-version=v1.14.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.11.11 

On success, the output ends with a join command like the one below. There is no need to run it yet; just record it.

kubeadm join 192.168.11.11:6443 --token 4kkgk2.ht4hnyeinhk6pwod \
    --discovery-token-ca-cert-hash sha256:8a3d03e783e82b1a066957e3311dd94fa2b76372b9c2b0bc3597a5e357ea5ca9 

2. Configure kubectl for each user who needs it:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the component status:

kubectl get cs

3. Install a Pod network add-on (flannel, via the Qiniu mirror):

curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i "s/quay.io\/coreos\/flannel/quay-mirror.qiniu.com\/coreos\/flannel/g" kube-flannel.yml
kubectl apply -f kube-flannel.yml
rm -f kube-flannel.yml

Use the command below to confirm that all Pods reach the Running state; this can take quite a while.

kubectl get pod --all-namespaces -o wide

4. Add the worker nodes to the cluster

On node12 and node13, run the join command recorded earlier from node11's output:

kubeadm join 192.168.11.11:6443 --token 4kkgk2.ht4hnyeinhk6pwod \
    --discovery-token-ca-cert-hash sha256:8a3d03e783e82b1a066957e3311dd94fa2b76372b9c2b0bc3597a5e357ea5ca9 
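The bootstrap token in the join command expires after 24 hours by default. If it has expired by the time a node joins, a fresh join command can be generated on the master; a sketch using the standard kubeadm subcommands:

```shell
# List existing bootstrap tokens and their expiry times
kubeadm token list

# Create a new token and print the complete join command for it
kubeadm token create --print-join-command
```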

Check the node status; it can take a while before every node is Ready:

kubectl get nodes

5. Switch kube-proxy to ipvs mode

kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
kubectl apply -f kube-proxy-configmap.yaml
rm -f kube-proxy-configmap.yaml
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}' 
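To confirm that the restarted kube-proxy pods actually came up in ipvs mode, a quick check (a sketch; run on the master against the live cluster):

```shell
# The ip_vs kernel modules must be loaded for ipvs mode to work
lsmod | grep ip_vs

# The kube-proxy logs should mention the ipvs proxier
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs

# ipvsadm lists the virtual servers kube-proxy has programmed
ipvsadm -Ln
```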

IV. Deploy kubernetes-dashboard

1. Generate an access certificate:

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Import the generated kubecfg.p12 certificate into Windows: double-click it and step through the import wizard.

Note: restart the browser after importing.

2. Generate an access token

Create a file named admin-user.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Create the ServiceAccount and bind it to the cluster-admin role:

kubectl create -f admin-user.yaml

Get the token (the secret name carries a random suffix, so look it up first):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWhydmxrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmOGE0YzY4NC04Yjc1LTExZTktYjE2ZC0wMDBjMjk5ZGViOWUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.FftzgCCzKiWKNghPDBDqAfBJPwUgHbEJGMv5fEBEq53oV8O3vlGHmZGRqjUYHiye2qhdg084iIRDv-w03b2KroEiMX0nXYN0l73-XlEl6ecU_v7-66xiS9fDPR0JiI6SW_cyL5k16P4qIwBwk1ze99r_R0t2Q8xiplFMVW02u0zM0IG2xtT5AaXqV5uEX3kg6nThloOmxFGbyIPF743D0WEtbNicVI2YYIPM7B8CxnHZ5_9MJ5qLtjVAttomLy30O5VEgweljnaL70tja_M9DlLsBV1O8q01AFZhfBPPaNtuDrPU-OZkVb9isiMYiL92lQLEIGswWlTj-uhmSTQYGA

Record the generated token; it is used to log in later.
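When reading the token straight from the secret instead of from `kubectl describe` (for example with `-o jsonpath`), the value comes back base64-encoded and must be decoded. The secret name `admin-user-token-hrvlk` below is illustrative (the suffix is random per cluster); the decode step itself is demonstrated with a stand-in value so it runs anywhere:

```shell
# On a live cluster (illustrative secret name; the suffix is random):
#   kubectl -n kube-system get secret admin-user-token-hrvlk \
#     -o jsonpath='{.data.token}' | base64 -d
# The decode step, shown with a stand-in value:
encoded=$(printf 'stand-in-token' | base64)
printf '%s\n' "$(printf '%s' "$encoded" | base64 -d)"
```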

3. Deploy kubernetes-dashboard

curl -o kubernetes-dashboard.yaml https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
Edit kubernetes-dashboard.yaml and change the image line (around line 125) from
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
to
image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
Then apply:
kubectl apply -f kubernetes-dashboard.yaml

4. Access the dashboard

https://192.168.11.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

where 192.168.11.11 is the master's IP and 6443 is the apiserver port.

On the login page, choose the Token option and paste in the token generated earlier.
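As an alternative to importing a client certificate into the browser, the dashboard can also be reached through kubectl's built-in proxy from any machine with a working kubeconfig (a sketch; requires a live cluster):

```shell
# Start a local proxy to the apiserver on 127.0.0.1:8001 (runs in the foreground)
kubectl proxy

# Then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```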
