How to Perform the Single-Node Deployment in a Kubernetes Binary Deployment

Published 2021-11-18 14:46:17, by 柒染

This article walks through the single-node (single-master) portion of a Kubernetes binary deployment. The process is unfamiliar to many readers, so the steps below are collected in one place; hopefully you will find them useful.

Kubernetes cluster architecture diagram (figure not reproduced)

Components on each node and their roles:

1. Master components

kube-apiserver

The Kubernetes API server is the cluster's unified entry point and the coordinator for the other components. It exposes its services as RESTful APIs; every create, read, update, delete, and watch operation on object resources is handled by the APIServer and then persisted to etcd.

kube-controller-manager

Handles the cluster's routine background tasks. Each resource type has a corresponding controller, and the ControllerManager is responsible for managing these controllers.

kube-scheduler

Selects a Node for each newly created Pod according to the scheduling algorithm. It can be deployed anywhere: on the same host as other components or on a separate one.

etcd

A distributed key-value store used to hold cluster state, such as Pod and Service object data.


2. Node components

kubelet

The kubelet is the master's agent on each Node. It manages the lifecycle of the containers running on its host: creating containers, mounting volumes into Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.

kube-proxy

Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.

docker or rkt

The container engine that actually runs the containers.


How it works:

1. Prepare a yml file containing the application's Deployment and send it to the ApiServer with the kubectl client tool.

2. The ApiServer receives the client request and stores the resource in the database (etcd).

3. Controller components (scheduler, replication, endpoint) watch for resource changes and react to them.

4. The ReplicaSet watches the database and creates the desired number of pod instances.

5. The Scheduler likewise watches the database, finds Pods that have not yet been assigned to a node, assigns them to nodes that can run them according to a set of rules, and records the assignments in the database.

6. The kubelet watches the database, discovers the pods assigned to its own node, runs any new ones, and manages their lifecycle from then on.

Note: kube-proxy runs on every host in the cluster and manages network traffic, including service discovery and load balancing. When data arrives at a host, it routes it to the correct pod or container; for outbound data it can resolve the remote service from the request address and route the data correctly, in some cases distributing requests across multiple instances in the cluster with round-robin scheduling.
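
As a concrete illustration of step 1 (not part of the original walkthrough; the nginx-demo name and the image tag are hypothetical), a Deployment can be submitted to the ApiServer like this:

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo              # hypothetical name, for illustration only
spec:
  replicas: 2                   # the ReplicaSet will maintain two pod instances
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.15       # any reachable image works
EOF

Once the object is stored in etcd, the controllers, scheduler, and kubelets carry out steps 2 through 6 automatically.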


Kubernetes core concepts

1. Pod

2. Controllers

3. Service

Kubernetes cluster deployment

1. The three officially provided deployment methods

minikube

Minikube is a tool that quickly runs a single-node Kubernetes locally; it is intended only for users trying out Kubernetes or doing day-to-day development.
Docs: https://kubernetes.io/docs/setup/minikube/

kubeadm

Kubeadm is also a tool. It provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary packages

Recommended: download the release binaries from the official site and deploy each component manually to assemble the Kubernetes cluster.

Download: https://github.com/kubernetes/kubernetes/releases
 

2. Kubernetes architecture diagram (figure not reproduced)

3. Self-signed SSL certificates

Component: certificates it uses
etcd: ca.pem, server.pem, server-key.pem
flannel: ca.pem, server.pem, server-key.pem
kube-apiserver: ca.pem, server.pem, server-key.pem
kubelet: ca.pem, ca-key.pem
kube-proxy: ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl: ca.pem, admin.pem, admin-key.pem

4. Etcd database cluster deployment

Binary package download:
https://github.com/etcd-io/etcd/releases

k8s single-node deployment plan:

Part 1

1. Self-sign the etcd certificates
2. Deploy etcd
3. Install Docker on the nodes
4. Deploy flannel (first write the subnet into etcd)

Part 2 (master)

1. Self-sign the APIServer certificate
2. Deploy the APIServer component (token.csv)
3. Deploy the controller-manager (pointing at the apiserver certificates) and scheduler components

Part 3 (node)

1. Generate kubeconfigs (bootstrap.kubeconfig and kube-proxy.kubeconfig)
2. Deploy the kubelet component
3. Deploy the kube-proxy component

Part 4 (joining the cluster)

1. kubectl get csr && kubectl certificate approve to issue certificates and admit nodes
2. Add a node
3. Check the nodes with kubectl get node

k8s deployment layout:

Load balancers

Nginx1:192.168.35.104/24

Nginx2:192.168.35.105/24

Master nodes

master1:192.168.35.100/24

master2:192.168.35.103/24

Node (worker) nodes

node1:192.168.35.101/24

node2:192.168.35.102/24

Part 1

1.1 Self-sign the etcd certificates

1.1.1 On the master:

[root@localhost ~]# mkdir k8s
[root@localhost ~]# cd k8s/
[root@localhost k8s]# ls    ## files copied in from the host machine
etcd-cert.sh  etcd.sh
[root@localhost k8s]# mkdir etcd-cert
[root@localhost k8s]# mv etcd-cert.sh etcd-cert

The etcd.sh script:

vim etcd.sh

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1       # node name, e.g. etcd01
ETCD_IP=$2         # this node's IP address
ETCD_CLUSTER=$3    # the remaining cluster members

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

The etcd-cert.sh script:

vim etcd-cert.sh

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "10.206.240.188",
    "10.206.240.189",
    "10.206.240.111"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

1.1.2 Download the official cfssl tools

[root@localhost k8s]# vim cfssl.sh

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

[root@localhost k8s]# bash cfssl.sh  ## download the official cfssl binaries
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9.8M  100  9.8M    0     0  77052      0  0:02:14  0:02:14 --:--:-- 94447
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2224k  100 2224k    0     0  66701      0  0:00:34  0:00:34 --:--:-- 71949
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6440k  100 6440k    0     0  74368      0  0:01:28  0:01:28 --:--:-- 93942

[root@localhost k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
## cfssl generates certificates; cfssljson turns cfssl's JSON output into certificate files; cfssl-certinfo displays certificate information
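
As a quick illustration (not in the original transcript), a generated certificate can later be inspected with cfssl-certinfo; the path assumes you are in the directory holding the pem files:

cfssl-certinfo -cert server.pem        # prints the subject, SANs, and validity period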

1.2 Etcd deployment

1.2.1 Define the CA configuration

[root@localhost k8s]# cd etcd-cert/

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"     
        ]  
      } 
    }         
  }
}
EOF

1.2.2 Create the CA signing request

cat > ca-csr.json <<EOF 
{   
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

1.2.3 Generate the CA certificate, producing ca-key.pem and ca.pem

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2020/01/15 18:15:15 [INFO] generating a new CA key and certificate from CSR
2020/01/15 18:15:15 [INFO] generate received request
2020/01/15 18:15:15 [INFO] received CSR
2020/01/15 18:15:15 [INFO] generating key: rsa-2048
2020/01/15 18:15:15 [INFO] encoded CSR
2020/01/15 18:15:15 [INFO] signed certificate with serial number 661808851940283859099066838380794010566731982441

1.2.4 Specify the three etcd node addresses for peer-to-peer TLS

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.35.100",
    "192.168.35.101",
    "192.168.35.102"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

1.2.5 Generate the etcd server certificate, producing server-key.pem and server.pem

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

2020/01/15 18:24:09 [INFO] generate received request
2020/01/15 18:24:09 [INFO] received CSR
2020/01/15 18:24:09 [INFO] generating key: rsa-2048
2020/01/15 18:24:09 [INFO] encoded CSR
2020/01/15 18:24:09 [INFO] signed certificate with serial number 613252568370198035643630635602034323043189506463
2020/01/15 18:24:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

1.2.6 Copy the packages onto the CentOS 7 host

[root@localhost etcd-cert]# cd /root/k8s/
[root@localhost k8s]# ls              ## packages placed directly in this directory
cfssl.sh   etcd.sh                          flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz

1.2.7 Unpack etcd

[root@localhost k8s]# tar xf etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

[root@localhost k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p    ## config files, binaries, certificates

[root@localhost k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/

1.2.8 Copy the certificates

## copy the certificates into place
[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/

## the script now blocks, waiting for the other nodes to join
[root@localhost k8s]# bash etcd.sh etcd01 192.168.35.100 etcd02=https://192.168.35.101:2380,etcd03=https://192.168.35.102:2380

1.2.9 Open another session; the etcd process is already running

[root@localhost etcd-cert]# ps -ef | grep etcd
root      81594      1  0 12:25 ?        00:00:18 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.35.100:2380 --listen-client-urls=https://192.168.35.100:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.35.100:2379 --initial-advertise-peer-urls=https://192.168.35.100:2380 --initial-cluster=etcd01=https://192.168.35.100:2380,etcd02=https://192.168.35.101:2380,etcd03=https://192.168.35.102:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root      82044  79438  0 13:12 pts/0    00:00:00 grep --color=auto etcd

1.2.10 Disable the firewall and SELinux on master1, node1, and node2

systemctl stop firewalld.service 
setenforce 0

1.2.11 Copy the certificates to the other nodes

[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.35.101:/opt/
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.35.102:/opt

1.2.12 Copy the systemd unit file to the other nodes

[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.35.101:/usr/lib/systemd/system/
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.35.102:/usr/lib/systemd/system/

1.2.13 Modify the config on node01

[root@localhost ~]# vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.35.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.35.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.35.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.35.101:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.35.100:2380,etcd02=https://192.168.35.101:2380,etcd03=https://192.168.35.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

1.2.14 Modify the config on node02

[root@localhost ~]# vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.35.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.35.102:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.35.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.35.102:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.35.100:2380,etcd02=https://192.168.35.101:2380,etcd03=https://192.168.35.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

1.2.15 Start the service on the master first

[root@localhost system]# cd /root/k8s/
[root@localhost k8s]# bash etcd.sh etcd01 192.168.35.100 etcd02=https://192.168.35.101:2380,etcd03=https://192.168.35.102:2380

1.2.16 Start the service on node1 and node2

[root@localhost ~]# systemctl start etcd
[root@localhost ~]# systemctl status etcd

1.2.17 Check from the master again; the cluster has synchronized successfully

[root@localhost k8s]# bash etcd.sh etcd01 192.168.35.100 etcd02=https://192.168.35.101:2380,etcd03=https://192.168.35.102:2380

1.2.18 Check the cluster health

[root@localhost k8s]# cd etcd-cert/
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379" cluster-health
member 12a96220ac829a49 is healthy: got healthy result from https://192.168.35.101:2379
member 76797989afd0ecba is healthy: got healthy result from https://192.168.35.100:2379
member ff469df2baaba1da is healthy: got healthy result from https://192.168.35.102:2379
cluster is healthy

1.3 Install Docker on the nodes

Deploy Docker CE (19.x)

1.3.1 Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

1.3.2 Add the Aliyun mirror repository

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.3.3 Install Docker-CE

yum install -y docker-ce

1.3.4 Disable the firewall and SELinux

systemctl stop firewalld.service 
setenforce 0

1.3.5 Start the service and enable it at boot

systemctl start docker.service 
systemctl enable docker.service

1.3.6 Configure a registry mirror

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://cz1numin.mirror.aliyuncs.com"]
}
EOF

## reload systemd and restart docker
systemctl daemon-reload
systemctl restart docker

1.3.7 Network tuning

vim /etc/sysctl.conf
net.ipv4.ip_forward=1

## apply the sysctl change, restart networking, restart docker
sysctl -p
service network restart
systemctl restart docker

1.4 Flannel deployment

1.4.1 Write the allocated subnet range into etcd for flannel to use

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

1.4.2 Verify the written configuration

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

1.4.3 Copy the package to all node hosts (flannel only needs to run on the nodes)

[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.35.101:/root
root@192.168.35.101's password: 
flannel-v0.10.0-linux-amd64.tar.gz                 100% 9479KB  66.6MB/s   00:00    
[root@localhost k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.35.102:/root
root@192.168.35.102's password: 
flannel-v0.10.0-linux-amd64.tar.gz                 100% 9479KB  75.7MB/s   00:00

1.4.4 Unpack on every node

[root@localhost ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md

1.4.5 On node1

(1) Create the k8s working directory

[root@localhost ~]# ls /opt/
containerd  etcd  rh
[root@localhost ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost ~]# ls /opt/
containerd  etcd  kubernetes  rh
[root@localhost ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@localhost ~]# ls /opt/kubernetes/bin/
flanneld  mk-docker-opts.sh

[root@localhost ~]# vim flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

(2) Enable the flannel network

[root@localhost ~]# bash flannel.sh https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

(3) Configure Docker to use flannel

[root@localhost ~]# vim /usr/lib/systemd/system/docker.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

(4) Note: bip specifies the subnet docker0 uses at startup

[root@localhost ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.23.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.23.1/24 --ip-masq=false --mtu=1450"

(5) Restart the Docker service

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker

(6) Inspect the flannel network

[root@localhost ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.23.1  netmask 255.255.255.0  broadcast 172.17.23.255
        ether 02:42:7d:9a:31:89  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.35.101  netmask 255.255.255.0  broadcast 192.168.35.255
        inet6 fe80::d4e2:ef9e:6820:145a  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::2a3:b621:ca01:463e  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:01:82:fd  txqueuelen 1000  (Ethernet)
        RX packets 1148980  bytes 1135571843 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 594986  bytes 64240523 (61.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.23.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::58e3:97ff:fe97:28f2  prefixlen 64  scopeid 0x20<link>
        ether 5a:e3:97:97:28:f2  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 141 overruns 0  carrier 0  collisions 0

(7) Create a test container

[root@localhost ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete 
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
[root@809ba7f7fdbe /]# yum install net-tools -y               # install net-tools so the ifconfig command is available

[root@809ba7f7fdbe /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.23.2  netmask 255.255.255.0  broadcast 172.17.23.255
        ether 02:42:ac:11:17:02  txqueuelen 0  (Ethernet)
        RX packets 17384  bytes 13962588 (13.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5915  bytes 323222 (315.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

1.4.6 On node2 (same procedure as node1)

Note: the intermediate steps are omitted here; only the important results are shown.

(1) bip specifies the startup subnet

[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
[root@localhost ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.56.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.56.1/24 --ip-masq=false --mtu=1450"

(2) Inspect the flannel network

[root@localhost ~]# ifconfig 

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.56.1  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:4b:66:50:5b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::50f6:daff:fe4e:af02  prefixlen 64  scopeid 0x20<link>
        ether 52:f6:da:4e:af:02  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 106 overruns 0  carrier 0  collisions 0

(3) Create a test container

[root@localhost ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete 
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
[root@5d34e9d6766e /]# yum install net-tools -y

[root@5d34e9d6766e /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.2  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:ac:11:38:02  txqueuelen 0  (Ethernet)
        RX packets 17199  bytes 13954199 (13.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6723  bytes 367133 (358.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

1.4.7 Ping the other node's docker0 address to prove flannel routes between hosts

[root@localhost ~]# ping 172.17.56.1
PING 172.17.56.1 (172.17.56.1) 56(84) bytes of data.
64 bytes from 172.17.56.1: icmp_seq=1 ttl=64 time=0.307 ms
64 bytes from 172.17.56.1: icmp_seq=2 ttl=64 time=0.385 ms
64 bytes from 172.17.56.1: icmp_seq=3 ttl=64 time=0.302 ms
1.4.8 Ping between the centos:7 containers on the two nodes

[root@809ba7f7fdbe /]# ping 172.17.56.2
PING 172.17.56.2 (172.17.56.2) 56(84) bytes of data.
64 bytes from 172.17.56.2: icmp_seq=1 ttl=62 time=0.487 ms
64 bytes from 172.17.56.2: icmp_seq=2 ttl=62 time=0.273 ms
64 bytes from 172.17.56.2: icmp_seq=3 ttl=62 time=0.234 ms

Part 2 (master)

2.1 Self-sign the APIServer certificates

2.1.1 On the master, generate the apiserver certificates

[root@localhost k8s]# ls
etcd-cert                 etcd-v3.3.10-linux-amd64.tar.gz       master.zip
etcd.sh                   flannel-v0.10.0-linux-amd64.tar.gz
etcd-v3.3.10-linux-amd64  kubernetes-server-linux-amd64.tar.gz
[root@localhost k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh            
  inflating: controller-manager.sh   
  inflating: scheduler.sh            
[root@localhost k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost k8s]# mkdir k8s-cert                     # directory for the self-signed apiserver certificates
[root@localhost k8s]# cd k8s-cert/
[root@localhost k8s-cert]# ls
k8s-cert.sh

The k8s-cert.sh script:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
              "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

# hosts below: 10.0.0.1 (cluster service IP), 127.0.0.1, 192.168.35.100 (master1),
# 192.168.35.103 (master2), 192.168.35.200 (VIP), 192.168.35.104 (nginx1/master),
# 192.168.35.105 (nginx2/backup). JSON permits no inline comments, so the
# annotations live here rather than inside the file itself.
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.35.100",
      "192.168.35.103",
      "192.168.35.200",
      "192.168.35.104",
      "192.168.35.105",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2.1.2 Generate the k8s certificates

[root@localhost k8s-cert]#  bash k8s-cert.sh
2020/02/05 15:04:48 [INFO] generating a new CA key and certificate from CSR
2020/02/05 15:04:48 [INFO] generate received request
2020/02/05 15:04:48 [INFO] received CSR
2020/02/05 15:04:48 [INFO] generating key: rsa-2048
2020/02/05 15:04:49 [INFO] encoded CSR
2020/02/05 15:04:49 [INFO] signed certificate with serial number 213037901624270895781178406733487458634913677457
2020/02/05 15:04:49 [INFO] generate received request
2020/02/05 15:04:49 [INFO] received CSR
2020/02/05 15:04:49 [INFO] generating key: rsa-2048
2020/02/05 15:04:49 [INFO] encoded CSR
2020/02/05 15:04:49 [INFO] signed certificate with serial number 459900166635201812257861637517291405808825101980
2020/02/05 15:04:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/02/05 15:04:49 [INFO] generate received request
2020/02/05 15:04:49 [INFO] received CSR
2020/02/05 15:04:49 [INFO] generating key: rsa-2048
2020/02/05 15:04:49 [INFO] encoded CSR
2020/02/05 15:04:49 [INFO] signed certificate with serial number 229556723754610484918666718008179982380721010192
2020/02/05 15:04:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/02/05 15:04:49 [INFO] generate received request
2020/02/05 15:04:49 [INFO] received CSR
2020/02/05 15:04:49 [INFO] generating key: rsa-2048
2020/02/05 15:04:49 [INFO] encoded CSR
2020/02/05 15:04:49 [INFO] signed certificate with serial number 435030710152838930041815329750301323468065876772
2020/02/05 15:04:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@localhost k8s-cert]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[root@localhost k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/

2.1.3 Unpack the kubernetes tarball

[root@localhost k8s-cert]# cd ..
[root@localhost k8s]# ls
apiserver.sh                     flannel-v0.10.0-linux-amd64.tar.gz
controller-manager.sh            k8s-cert
etcd-cert                        kubernetes-server-linux-amd64.tar.gz
etcd.sh                          master.zip
etcd-v3.3.10-linux-amd64         scheduler.sh
etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz

2.1.4 Copy the key binaries

[root@localhost k8s]# cd /root/k8s/kubernetes/server/bin/
[root@localhost bin]# ls
apiextensions-apiserver              kube-controller-manager.tar
cloud-controller-manager             kubectl
cloud-controller-manager.docker_tag  kubelet
cloud-controller-manager.tar         kube-proxy
hyperkube                            kube-proxy.docker_tag
kubeadm                              kube-proxy.tar
kube-apiserver                       kube-scheduler
kube-apiserver.docker_tag            kube-scheduler.docker_tag
kube-apiserver.tar                   kube-scheduler.tar
kube-controller-manager              mounter
kube-controller-manager.docker_tag
[root@localhost bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/

2.2 Deploy the APIServer component (token.csv)

2.2.1 Generate a random bootstrap token

[root@localhost k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
b639a2717c79839fd7fab7bac97dca32               # the randomly generated token
[root@localhost k8s]# vim /opt/kubernetes/cfg/token.csv

b639a2717c79839fd7fab7bac97dca32,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

## fields: token, user name, user ID, group

2.2.2 With the binaries, token, and certificates ready, start the apiserver

[root@localhost k8s]#  bash apiserver.sh 192.168.35.100 https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
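
The apiserver.sh script itself is not reproduced in this article. Judging from the invocation above and the generated configuration shown in 2.2.4, a minimal sketch would look like the following; the systemd unit body is an assumption:

#!/bin/bash
# Usage: ./apiserver.sh <MASTER_ADDRESS> <ETCD_SERVERS>
MASTER_ADDRESS=$1
ETCD_SERVERS=$2

# \\ leaves a literal backslash (line continuation) in the generated file
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver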

2.2.3 Check that the process started successfully

[root@localhost k8s]# ps aux | grep kube
root      83786 92.7 14.4 358828 269812 ?       Ssl  15:56   0:06 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379 --bind-address=192.168.35.100 --secure-port=6443 --advertise-address=192.168.35.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      83794  0.0  0.0 112676   980 pts/0    S+   15:56   0:00 grep --color=auto kube

2.2.4 Inspect the generated configuration file

[root@localhost k8s]# cat /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379 \
--bind-address=192.168.35.100 \
--secure-port=6443 \
--advertise-address=192.168.35.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

2.2.5 Check the listening ports (HTTPS on 6443, local HTTP on 8080)

[root@localhost k8s]# netstat -ntap | grep 6443
tcp        0      0 192.168.35.100:6443     0.0.0.0:*               LISTEN      83786/kube-apiserve 
tcp        0      0 192.168.35.100:44312    192.168.35.100:6443     ESTABLISHED 83786/kube-apiserve 
tcp        0      0 192.168.35.100:6443     192.168.35.100:44312    ESTABLISHED 83786/kube-apiserve 
[root@localhost k8s]# netstat -ntap | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      83786/kube-apiserve

2.3 Deploy the controller-manager (with the apiserver certificates) and scheduler components

2.3.1 Start the scheduler service

[root@localhost k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@localhost k8s]# ps aux | grep ku
postfix   83202  0.0  0.2  91732  4008 ?        S    15:10   0:00 pickup -l -t unix -u
root      83786  4.3 18.3 427420 342812 ?       Ssl  15:56   0:12 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379 --bind-address=192.168.35.100 --secure-port=6443 --advertise-address=192.168.35.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      83904  0.8  1.0  45360 20356 ?        Ssl  16:01   0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      83912  0.0  0.0 112676   984 pts/0    R+   16:01   0:00 grep --color=auto ku
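
scheduler.sh is likewise not reproduced in the original. Based on the kube-scheduler process visible above, a minimal sketch would be (the systemd unit body is an assumption):

#!/bin/bash
# Usage: ./scheduler.sh <MASTER_ADDRESS>   (here: 127.0.0.1)
MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler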

2.3.2 Start the controller-manager

[root@localhost k8s]#  chmod +x controller-manager.sh
[root@localhost k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
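
controller-manager.sh is not shown either. A plausible sketch following the same pattern is below; the cluster-signing and service-account flags are assumptions inferred from the section title ("with the apiserver certificates") and the standard kube-controller-manager options of this era, not confirmed by the original:

#!/bin/bash
# Usage: ./controller-manager.sh <MASTER_ADDRESS>   (here: 127.0.0.1)
MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager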

2.3.3 Check the master component status

[root@localhost k8s]#  /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

Part 3 (node)

3.1 Generate kubeconfigs (bootstrap.kubeconfig and kube-proxy.kubeconfig)

On the master

3.1.1 Copy kubelet and kube-proxy to the node machines

[root@localhost k8s]# cd kubernetes/server/bin/
[root@localhost bin]# ls
apiextensions-apiserver              kube-controller-manager.tar
cloud-controller-manager             kubectl
cloud-controller-manager.docker_tag  kubelet
cloud-controller-manager.tar         kube-proxy
hyperkube                            kube-proxy.docker_tag
kubeadm                              kube-proxy.tar
kube-apiserver                       kube-scheduler
kube-apiserver.docker_tag            kube-scheduler.docker_tag
kube-apiserver.tar                   kube-scheduler.tar
kube-controller-manager              mounter
kube-controller-manager.docker_tag
[root@localhost bin]# scp kubelet kube-proxy root@192.168.35.101:/opt/kubernetes/bin/ 
root@192.168.35.101's password: 
kubelet                                            100%  168MB  87.4MB/s   00:01    
kube-proxy                                         100%   48MB  90.9MB/s   00:00    
[root@localhost bin]# scp kubelet kube-proxy root@192.168.35.102:/opt/kubernetes/bin/ 
root@192.168.35.102's password: 
kubelet                                            100%  168MB  93.4MB/s   00:01    
kube-proxy                                         100%   48MB 100.5MB/s   00:00

On node01

3.1.2 Copy node.zip to /root and unzip it

[root@localhost ~]# ls
anaconda-ks.cfg                     initial-setup-ks.cfg  公共  图片  音乐
flannel.sh                          node.zip              模板  文档  桌面
flannel-v0.10.0-linux-amd64.tar.gz  README.md             视频  下载
[root@localhost ~]# unzip node.zip 
Archive:  node.zip
  inflating: proxy.sh                
  inflating: kubelet.sh

On the master

3.1.3 Create a working directory

[root@localhost k8s]#  mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig/
[root@localhost kubeconfig]# ls
kubeconfig.sh               

The kubeconfig.sh script:

# Create the TLS bootstrapping token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------                                 # delete the token-generation section above; token.csv was already created in 2.2.1

APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

3.1.4 Retrieve the token

[root@localhost kubeconfig]# cat /opt/kubernetes/cfg/token.csv
b639a2717c79839fd7fab7bac97dca32,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

3.1.5 Update the script with the actual token

[root@localhost kubeconfig]# vim kubeconfig.sh

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=b639a2717c79839fd7fab7bac97dca32 \
  --kubeconfig=bootstrap.kubeconfig

3.1.6 Set the PATH environment variable (persist it in /etc/profile)

[root@localhost kubeconfig]# vim /etc/profile

## append at the end of the file

export PATH=$PATH:/opt/kubernetes/bin/

[root@localhost kubeconfig]# source /etc/profile                # reload the profile

[root@localhost kubeconfig]# kubectl get cs              # check component status
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

3.1.7 Rename the script and generate the kubeconfig files

[root@localhost kubeconfig]# mv kubeconfig.sh kubeconfig
[root@localhost kubeconfig]# ls
kubeconfig
[root@localhost kubeconfig]# vim kubeconfig 
[root@localhost kubeconfig]# bash kubeconfig 192.168.35.100 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

[root@localhost kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

3.1.8 Copy the kubeconfig files to the nodes

[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.35.101:/opt/kubernetes/cfg/
root@192.168.35.101's password: 
bootstrap.kubeconfig                               100% 2168     2.7MB/s   00:00    
kube-proxy.kubeconfig                              100% 6270     5.1MB/s   00:00    
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.35.102:/opt/kubernetes/cfg/
root@192.168.35.102's password: 
bootstrap.kubeconfig                               100% 2168     1.8MB/s   00:00    
kube-proxy.kubeconfig                              100% 6270     5.9MB/s   00:00

3.1.9 Create the bootstrap role binding that lets kubelets request certificate signing from the apiserver (critical)

[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
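
As an optional verification step (not part of the original transcript), the new binding can be inspected:

kubectl describe clusterrolebinding kubelet-bootstrap    # shows role system:node-bootstrapper bound to user kubelet-bootstrap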

3.2 Deploy the kubelet component

On node01

3.2.1 Run the script

[root@localhost ~]# bash kubelet.sh 192.168.35.101
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
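
kubelet.sh is not reproduced in the original. Reconstructing it from the configuration files shown later in section 4.1.2, a plausible sketch looks like this (the systemd unit body is an assumption):

#!/bin/bash
# Usage: ./kubelet.sh <NODE_ADDRESS> [DNS_SERVER_IP]
NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet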

3.2.2 Verify the kubelet service started

[root@localhost ~]# ps aux | grep kube
root      85079  0.0  0.8 300512 16424 ?        Ssl  13:49   0:02 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.35.100:2379,https://192.168.35.101:2379,https://192.168.35.102:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root     109665  0.6  2.3 370688 42948 ?        Ssl  18:46   0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.35.101 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root     109762  0.0  0.0 112676   980 pts/0    S+   18:47   0:00 grep --color=auto kube

3.2.3 Check the service status

[root@localhost ~]# systemctl status kubelet.service 
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2020-02-05 18:46:35 CST; 3min 1s ago
 Main PID: 109665 (kubelet)
   Memory: 61.6M
   CGroup: /system.slice/kubelet.service
           └─109665 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostnam...

2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.374771  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.601115  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.619113  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.619153  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.619192  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.619229  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.619294  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.619302  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.620575  109...
2月 05 18:46:35 localhost.localdomain kubelet[109665]: I0205 18:46:35.622388  109...
Hint: Some lines were ellipsized, use -l to show in full.

On the master

3.2.4 Check the certificate request from node1

[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-HF551kae9hkzNGwirEbd2uAiStXnawpEI-y7nLP4sTU   5m9s   kubelet-bootstrap   Pending

Pending: the node is waiting for the cluster to issue its certificate

3.2.5 Approve the request and issue the certificate

[root@localhost kubeconfig]# kubectl certificate approve node-csr-HF551kae9hkzNGwirEbd2uAiStXnawpEI-y7nLP4sTU
certificatesigningrequest.certificates.k8s.io/node-csr-HF551kae9hkzNGwirEbd2uAiStXnawpEI-y7nLP4sTU approved

3.2.6 Check the certificate status again

[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-HF551kae9hkzNGwirEbd2uAiStXnawpEI-y7nLP4sTU   9m18s   kubelet-bootstrap   Approved,Issued

Approved,Issued: the node has been admitted to the cluster

3.2.7 View the cluster nodes; node01 has joined successfully

[root@localhost kubeconfig]# kubectl get node
NAME             STATUS   ROLES    AGE     VERSION
192.168.35.101   Ready    <none>   2m18s   v1.12.3

3.3 Deploy the kube-proxy component

On node1

3.3.1 Start the proxy service

[root@localhost ~]# bash proxy.sh 192.168.35.101
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@localhost ~]# systemctl status kube-proxy.service 
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2020-02-05 18:59:58 CST; 33s ago
 Main PID: 111731 (kube-proxy)
   Memory: 8.9M
   CGroup: /system.slice/kube-proxy.service
           ‣ 111731 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --host...

2月 05 19:00:28 localhost.localdomain kube-proxy[111731]: I0205 19:00:28.692738  ...
2月 05 19:00:28 localhost.localdomain kube-proxy[111731]: I0205 19:00:28.695213  ...
2月 05 19:00:28 localhost.localdomain kube-proxy[111731]: I0205 19:00:28.713256  ...
2月 05 19:00:28 localhost.localdomain kube-proxy[111731]: I0205 19:00:28.724431  ...
2月 05 19:00:28 localhost.localdomain kube-proxy[111731]: I0205 19:00:28.727449  ...
2月 05 19:00:28 localhost.localdomain kube-proxy[111731]: I0205 19:00:28.727463  ...
2月 05 19:00:29 localhost.localdomain kube-proxy[111731]: I0205 19:00:29.754834  ...
2月 05 19:00:29 localhost.localdomain kube-proxy[111731]: I0205 19:00:29.799929  ...
2月 05 19:00:31 localhost.localdomain kube-proxy[111731]: I0205 19:00:31.754579  ...
2月 05 19:00:31 localhost.localdomain kube-proxy[111731]: I0205 19:00:31.797581  ...
Hint: Some lines were ellipsized, use -l to show in full.
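
proxy.sh is not shown in the original either; from the kube-proxy configuration in section 4.1.2, a plausible sketch (the systemd unit body is an assumption):

#!/bin/bash
# Usage: ./proxy.sh <NODE_ADDRESS>
NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy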

3.3.2 Copy the finished /opt/kubernetes directory to the other node and modify it there

[root@localhost ~]# scp -r /opt/kubernetes/ root@192.168.35.102:/opt/
The authenticity of host '192.168.35.102 (192.168.35.102)' can't be established.
ECDSA key fingerprint is SHA256:JsLSnAul/dncM/HPvpJWWB09dHLzpIfArHv1fWjQyA8.
ECDSA key fingerprint is MD5:d1:b7:d7:74:c6:4a:2a:7b:fc:33:8c:9c:3a:f2:6e:8a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.35.102' (ECDSA) to the list of known hosts.
root@192.168.35.102's password: 
flanneld                                           100%  238   100.8KB/s   00:00    
bootstrap.kubeconfig                               100% 2168     5.8MB/s   00:00    
kube-proxy.kubeconfig                              100% 6270    11.5MB/s   00:00    
kubelet                                            100%  378   885.4KB/s   00:00    
kubelet.config                                     100%  268   812.9KB/s   00:00    
kubelet.kubeconfig                                 100% 2297     5.7MB/s   00:00    
kube-proxy                                         100%  190   546.5KB/s   00:00    
mk-docker-opts.sh                                  100% 2139     3.5MB/s   00:00    
scp: /opt//kubernetes/bin/flanneld: Text file busy
kubelet                                            100%  168MB 116.0MB/s   00:01    
kube-proxy                                         100%   48MB 122.2MB/s   00:00    
kubelet.crt                                        100% 2193   729.3KB/s   00:00    
kubelet.key                                        100% 1675     3.5MB/s   00:00    
kubelet-client-2020-02-05-18-55-07.pem             100% 1277   488.0KB/s   00:00    
kubelet-client-current.pem                         100% 1277   423.3KB/s   00:00

3.3.3 Copy the kubelet and kube-proxy service files to node2

[root@localhost ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.35.102:/usr/lib/systemd/system/
root@192.168.35.102's password: 
kubelet.service                                    100%  264   113.9KB/s   00:00    
kube-proxy.service                                 100%  231   529.7KB/s   00:00

Part 4 (joining the cluster)

4.1 On node02, make the modifications

4.1.1 First delete the copied certificates; node02 will request its own

[root@localhost ~]# cd /opt/kubernetes/ssl/
[root@localhost ssl]# rm -rf *

4.1.2 Modify the three configuration files: kubelet, kubelet.config, and kube-proxy

[root@localhost ssl]# cd ../cfg/
[root@localhost cfg]# ls
bootstrap.kubeconfig  kubelet         kubelet.kubeconfig  kube-proxy.kubeconfig
flanneld              kubelet.config  kube-proxy

[root@localhost cfg]# vim kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.35.102 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@localhost cfg]# vim kubelet.config 

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.35.102
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@localhost cfg]# vim kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.35.102 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

4.1.3 Start the services

[root@localhost cfg]#  systemctl start kubelet.service
[root@localhost cfg]#  systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost cfg]#  systemctl start kube-proxy.service 
[root@localhost cfg]#  systemctl enable kube-proxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

4.2 On the master

4.2.1 View the pending request

[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-HF551kae9hkzNGwirEbd2uAiStXnawpEI-y7nLP4sTU   32m     kubelet-bootstrap   Approved,Issued
node-csr-TszRzIg1T7JcfFKuwrCb82bqv7mcwvX1xZD8KMQJZCY   2m25s   kubelet-bootstrap   Pending

4.2.2 Approve it to admit the node to the cluster

[root@localhost kubeconfig]# kubectl certificate approve node-csr-TszRzIg1T7JcfFKuwrCb82bqv7mcwvX1xZD8KMQJZCY
certificatesigningrequest.certificates.k8s.io/node-csr-TszRzIg1T7JcfFKuwrCb82bqv7mcwvX1xZD8KMQJZCY approved

4.2.3 View the cluster nodes

[root@localhost kubeconfig]# kubectl get node
NAME             STATUS     ROLES    AGE   VERSION
192.168.35.101   Ready      <none>   24m   v1.12.3
192.168.35.102   Ready      <none>   17m   v1.12.3

This completes the single-node (single-master) portion of the Kubernetes binary deployment: etcd and flannel are running, the master components are up, and both nodes have joined the cluster.

Further reading:
  1. Kubernetes binary deployment: load balancer deployment (3)
  2. Kubernetes binary deployment: deploying the UI
