How do you install Kubernetes 1.5 with kubeadm? This article gives a detailed, step-by-step answer to that question, in the hope of helping anyone looking for a simple, workable approach.
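Before diving into the details, the whole procedure below can be condensed into a short command sketch. Treat it as an outline rather than a copy-paste script: `<token>` and `<master-ip>` stand for the values that `kubeadm init` prints, and the apt repository setup (shown in full below) is omitted here.

```shell
# Condensed outline of the walkthrough below (Ubuntu 16.04, run as root).

# On every machine (master and nodes), after adding the kubernetes apt
# repository as described below: install Docker and the kube packages.
apt-get update
apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni

# On the master only: initialize the cluster with the Flannel-compatible
# CIDR, then install the Flannel pod network.
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each node: join using the token and IP printed by kubeadm init.
kubeadm join --token=<token> <master-ip>
```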
Installing Kubernetes 1.5 with kubeadm

1. System version and prerequisites

System version: Ubuntu 16.04.

```
root@master:~# docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   23cf638
 Built:        Tue, 27 Sep 2016 12:25:38 +1300
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   23cf638
 Built:        Tue, 27 Sep 2016 12:25:38 +1300
 OS/Arch:      linux/amd64
```

Prerequisites for the deployment:

- At least 1 GB of memory on every host.
- All hosts can reach each other over the network.

2. Deployment

Install kubelet and kubeadm on the hosts. The official way to add the apt key is:

```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
```

I used http://119.29.98.145:8070/zhi/apt-key.gpg instead. On the master host:

```
curl -s http://119.29.98.145:8070/zhi/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```

The downloaded kube components are not started automatically. Under /lib/systemd/system we can see kubelet.service:

```
root@master:~# ls /lib/systemd/system | grep kube
kubelet.service
```

The kubelet version:

```
root@master:~# kubelet --version
Kubernetes v1.5.1
```

All the k8s core components are now in place; next we bootstrap the Kubernetes cluster.

3. Initializing the cluster

In theory, kubeadm builds a cluster with just the init and join commands; init initializes the cluster on the master node. Unlike deployments before k8s 1.4, all the core components that kubeadm installs run as containers on the master node. Therefore, before running kubeadm init, it is best to put a registry mirror ("accelerator") proxy in front of the Docker engine on the master node, because kubeadm pulls the images of many core components from the gcr.io/google_containers repository.

In the kubeadm documentation, installing a Pod network is a separate step; kubeadm init does not choose and install a default Pod network for you. We pick Flannel as our Pod network, not only because our previous cluster used Flannel and it proved stable, but also because Flannel is the overlay network add-on that CoreOS built specifically for k8s; even the readme.md of the flannel repository says: "flannel is a network fabric for containers, designed for Kubernetes".

If we want to use Flannel, then per the kubeadm documentation we must pass the option --pod-network-cidr=10.244.0.0/16 to the init command.

4. Running kubeadm init

Execute the kubeadm init command:

```
root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "2909ca.c0b0772a8817f9e3"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.761716 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.003312 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 1.002402 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx
```

(Write down the token and the master IP.)

What has changed on the master node after a successful init? All the k8s core components are up and running:

```
root@master:~# ps -ef | grep kube
root 23817     1 2 14:07 ?     00:00:35 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local
root 23921 23900 0 14:07 ?     00:00:01 kube-scheduler --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080
root 24055 24036 0 14:07 ?     00:00:10 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address=<master-ip> --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379
root 24084 24070 0 14:07 ?     00:00:11 kube-controller-manager --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 --cluster-name=kubernetes --root-ca-file=/etc/kubernetes/pki/ca.pem --service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16
root 24242 24227 0 14:07 ?     00:00:00 /usr/local/bin/kube-discovery
root 24308 24293 1 14:07 ?     00:00:15 kube-proxy --kubeconfig=/run/kubeconfig
root 29457 29441 0 14:09 ?     00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 29498 29481 0 14:09 ?     00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
root 30372 30357 0 14:10 ?     00:00:01 /exechealthz --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null --url=/healthz-dnsmasq --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null --url=/healthz-kubedns --port=8080 --quiet
root 30682 30667 0 14:10 ?     00:00:01 /kube-dns --domain=cluster.local --dns-port=10053 --config-map=kube-dns --v=2
root 48755  1796 0 14:31 pts/0 00:00:00 grep --color=auto kube
```

And they are started as multiple containers:

```
root@master:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
c4209b1077d2 gcr.io/google_containers/kubedns-amd64:1.9 "/kube-dns --domain=c" 22 minutes ago Up 22 minutes k8s_kube-dns.61e5a20f_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_fc02f762
0908d6398b0b gcr.io/google_containers/exechealthz-amd64:1.2 "/exechealthz '--cmd=" 22 minutes ago Up 22 minutes k8s_healthz.9d343f54_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_0ee806f6
0e35e96ca4ac gcr.io/google_containers/dnsmasq-metrics-amd64:1.0 "/dnsmasq-metrics --v" 22 minutes ago Up 22 minutes k8s_dnsmasq-metrics.2bb05ef7_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_436b9370
3921b4e59aca gcr.io/google_containers/kube-dnsmasq-amd64:1.4 "/usr/sbin/dnsmasq --" 22 minutes ago Up 22 minutes k8s_dnsmasq.f7e18a01_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_06c5efa7
18513413ba60 gcr.io/google_containers/pause-amd64:3.0 "/pause" 22 minutes ago Up 22 minutes k8s_POD.d8dbe16c_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_9de0a18d
45132c8d6d3d quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 "/bin/sh -c 'set -e -" 23 minutes ago Up 23 minutes k8s_install-cni.fc218cef_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_88dffd75
4c2a2e46c808 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 "/opt/bin/flanneld --" 23 minutes ago Up 23 minutes k8s_kube-flannel.5fdd90ba_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_2706c3cb
ad08c8dd177c gcr.io/google_containers/pause-amd64:3.0 "/pause" 23 minutes ago Up 23 minutes k8s_POD.d8dbe16c_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_279d8436
847f00759977 gcr.io/google_containers/kube-proxy-amd64:v1.5.1 "kube-proxy --kubecon" 24 minutes ago Up 24 minutes k8s_kube-proxy.2f62b4e5_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c1f31904
f8da0f38f3e1 gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c340d947
c1efa29640d1 gcr.io/google_containers/kube-discovery-amd64:1.0 "/usr/local/bin/kube-" 24 minutes ago Up 24 minutes k8s_kube-discovery.6907cb07_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_c4827da2
4c6a646d0b2e gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_8823b66a
ece79181f177 gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_dummy.702d1bd5_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_ade728ba
9c3364c623df gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_838c58b5
a64a3363a82b gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1 "kube-controller-mana" 25 minutes ago Up 25 minutes k8s_kube-controller-manager.84edb2e5_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_697ef6ee
27625502c298 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 25 minutes ago Up 25 minutes k8s_kube-apiserver.5942f3e3_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844
5b2cc5cb9ac1 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_2f88a796
e12ef7b3c1f0 gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm "etcd --listen-client" 25 minutes ago Up 25 minutes k8s_etcd.c323986f_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_ef6eb513
84a731cbce18 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e
612b021457a1 gcr.io/google_containers/kube-scheduler-amd64:v1.5.1 "kube-scheduler --add" 25 minutes ago Up 25 minutes k8s_kube-scheduler.bb7d750_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_a49fab86
ac0d8698f79f gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_9a6b7925
2a16a2217bf3 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_d2b51317
```

The kube-apiserver's IP is the host IP, from which we can infer that the container uses the host network; this can be seen from the network attributes of its corresponding pause container:

```
root@master:~# docker ps | grep apiserver
27625502c298 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 26 minutes ago Up 26 minutes k8s_kube-apiserver.5942f3e3_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844
84a731cbce18 gcr.io/google_containers/pause-amd64:3.0 "/pause" 26 minutes ago Up 26 minutes k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e
```

Problem 1: if something goes wrong partway through kubeadm init — for example, you forgot to set up the registry mirror beforehand and init hangs — you may press Ctrl+C to abort it. After fixing the configuration and running kubeadm init again, you may see the following kubeadm output:

```
# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
```
```
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
        Port 10250 is in use
        /etc/kubernetes/manifests is not empty
        /etc/kubernetes/pki is not empty
        /var/lib/kubelet is not empty
        /etc/kubernetes/admin.conf already exists
        /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
```

kubeadm automatically checks whether the current environment contains "leftovers" from a previous run. If it does, they must be cleaned up before init can be run again. We can clean the environment with "kubeadm reset" and start over:

```
# kubeadm reset
[preflight] Running pre-flight checks
[reset] Draining node: "iz25beglnhtz"
[reset] Removing node: "iz25beglnhtz"
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]
```

5. Installing the Flannel network

Since we want to use the Flannel network, we need to run the following install command:

```
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
```

After waiting a few seconds, we look at the cluster state on the master node again:

```
root@master:~# ps -ef | grep kube | grep flannel
root 29457 29441 0 14:09 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 29498 29481 0 14:09 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done

root@master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-r2mw3            1/1     Running   0          30m
kube-system   etcd-master                       1/1     Running   0          31m
kube-system   kube-apiserver-master             1/1     Running   0          31m
kube-system   kube-controller-manager-master    1/1     Running   0          31m
kube-system   kube-discovery-1769846148-4rsq9   1/1     Running   0          30m
kube-system   kube-dns-2924299975-txh2v         4/4     Running   0          30m
kube-system   kube-flannel-ds-0fnxc             2/2     Running   0          29m
kube-system   kube-flannel-ds-lpgpv             2/2     Running   0          23m
kube-system   kube-flannel-ds-s05nr             2/2     Running   0          18m
kube-system   kube-proxy-9c0bf                  1/1     Running   0          30m
kube-system   kube-proxy-t8hxr                  1/1     Running   0          18m
kube-system   kube-proxy-zd0v2                  1/1     Running   0          23m
kube-system   kube-scheduler-master             1/1     Running   0          31m
```

At the very least, all the core components of the cluster are up and running; it looks like a success. Next come the operations on the nodes.

6. Minion nodes: join the cluster

Here we use kubeadm's second command, kubeadm join, executed on each minion node. (Note: make sure port 9898 on the master node is open in the firewall.) Prerequisite: the kube components installed above must also be present on each node.

7. Installing kubelet and kubeadm on the nodes

```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
```

As before, I used http://119.29.98.145:8070/zhi/apt-key.gpg. On each node:

```
curl -s http://119.29.98.145:8070/zhi/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```

Then join with the token recorded from the master:

```
root@node01:~# kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx
```

8. Checking the current cluster state on the master node:

```
root@master:~# kubectl get node
NAME     STATUS         AGE
master   Ready,master   59m
node01   Ready          51m
node02   Ready          46m
```
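The join in step 8 only works with the exact token printed by kubeadm init, so it pays to sanity-check a token that was written down or stored in a script before running join. This is a minimal sketch assuming the v1.5 bootstrap-token format of six characters, a dot, then sixteen characters, all drawn from lowercase letters and digits; `check_token` is a hypothetical helper for illustration, not part of kubeadm:

```shell
#!/bin/sh
# check_token: succeed (exit 0) if $1 looks like a kubeadm bootstrap token,
# i.e. <6 chars>.<16 chars> drawn from [a-z0-9]; fail otherwise.
check_token() {
    echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

# The token from the init output above passes the check:
if check_token "2909ca.c0b0772a8817f9e3"; then
    echo "token format OK"
else
    echo "token format BAD"
fi
```

Running this prints `token format OK`; a truncated or mistyped token would print `token format BAD` before you waste time debugging a failed join.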
That concludes the answer to how to install Kubernetes 1.5 with kubeadm. Hopefully the material above is of some help; if you still have unresolved questions, you can follow the Yisu Cloud industry news channel to learn more.