This article explains in detail how to create a single master cluster with kubeadm. It is shared here as a reference, in the hope that after reading it you will have a solid grasp of the topic.
| Area | Maturity Level |
|---|---|
| Command line UX | GA |
| Implementation | GA |
| Config file API | beta |
| CoreDNS | GA |
| kubeadm alpha subcommands | alpha |
| High availability | alpha |
| DynamicKubeletConfig | alpha |
| Self-hosting | alpha |
kubeadm's overall feature state is GA. Some sub-features, such as the configuration file API, are still under active development. The way clusters are created may change slightly as the tool evolves, but the overall implementation should be fairly stable. Anything under kubeadm alpha is, by definition, supported at the alpha level.
- One or more machines running a deb/rpm-compatible operating system, such as Ubuntu or CentOS
- 2 GB or more of RAM per machine
- 2 or more CPUs on the master
- Full network connectivity between all machines in the cluster (a public or private network is fine)
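Before going further, it can help to quickly confirm these requirements on each machine. The commands below are ordinary Linux utilities rather than anything kubeadm-specific; `<other-node-ip>` is a placeholder for another machine in your planned cluster.

```
nproc                      # CPU count; the master should report 2 or more
free -m                    # memory in MB; expect 2048 or more
ping -c 3 <other-node-ip>  # basic check of connectivity between cluster machines
```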
For installing kubeadm, see https://my.oschina.net/jennerlo/blog/3007440
The master is the machine where the control plane components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with).
Choose a pod network add-on and check whether it requires any arguments to be passed to kubeadm init. Depending on the third-party provider you choose, you may need to set --pod-network-cidr to a provider-specific value. See Installing a pod network add-on.
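As an illustration: Flannel's stock manifests expect the pod CIDR 10.244.0.0/16, so an init invocation for a Flannel-based cluster would look roughly like the sketch below. Check your chosen provider's documentation for the value it actually requires.

```
# Example only: this CIDR matches Flannel's default manifests;
# other network add-ons expect different (or no) values.
kubeadm init --pod-network-cidr=10.244.0.0/16
```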
(Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to advertise the master's IP. To use a different network interface, pass --apiserver-advertise-address=<ip-address> to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=fd00::101
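For example, to advertise the API server on the address of a specific interface rather than the default-gateway interface (the address below is a placeholder, not a value from this article):

```
# Advertise the API server on this address instead of the one on the
# default-gateway interface.
kubeadm init --apiserver-advertise-address=192.168.10.10
```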
(Optional) Run kubeadm config images pull before kubeadm init to verify connectivity to the gcr.io registries.
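For example, the pre-pull can be paired with kubeadm config images list, a companion subcommand that only prints the image names:

```
# Pre-pull the control-plane images so kubeadm init does not have to,
# and confirm that gcr.io is reachable from this machine.
kubeadm config images pull

# Show which images kubeadm would use, without pulling them.
kubeadm config images list
```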
Initialize the master with:
kubeadm init <args>
For more information about the arguments to kubeadm init, see the kubeadm reference guide.
For a complete list of configuration options, see the configuration file documentation.
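As a minimal sketch of the config-file approach: the exact apiVersion depends on your kubeadm release (the config file API is still in beta, as noted in the table above), so treat the v1beta1 value below as an assumption and compare it against the output of kubeadm config print init-defaults on your version.

```
# Sketch: drive kubeadm init from a config file instead of flags.
# The apiVersion (kubeadm.k8s.io/v1beta1) is what kubeadm 1.13 uses
# and may differ on other releases.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
networking:
  podSubnet: 10.244.0.0/16
EOF

kubeadm init --config kubeadm-config.yaml
```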
To customize control plane components, including optional IPv6 assignment to the liveness probes of the control plane components and the etcd server, provide extra arguments to each component as described in custom arguments.
To run kubeadm init again, you must first tear down the cluster.
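On a single machine this tear-down is typically just kubeadm reset; the extra cleanup lines below are optional and environment-dependent, listed here only as a sketch.

```
# Undo the changes kubeadm made on this host so kubeadm init can be re-run.
sudo kubeadm reset

# Optional, environment-dependent cleanup:
sudo rm -rf /etc/cni/net.d                    # leftover CNI configuration
sudo iptables -F && sudo iptables -t nat -F   # rules left behind by kube-proxy
```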
If you join nodes with a different architecture to your cluster, create a separate Deployment or DaemonSet for kube-proxy and kube-dns on those nodes. This is because the Docker images for these components do not currently support multi-architecture.
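One way to approach this, sketched below under the assumption of a mixed amd64/arm64 cluster: pin the default kube-proxy DaemonSet to amd64 nodes with a nodeSelector, then create a copy of it that references the arm64 image (for example k8s.gcr.io/kube-proxy-arm64) for the ARM nodes. The architecture label name varies by release (beta.kubernetes.io/arch on older clusters, kubernetes.io/arch later).

```
# First half of the workaround: keep the default kube-proxy DaemonSet
# off the arm64 nodes by restricting it to amd64.
kubectl -n kube-system patch daemonset kube-proxy --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"beta.kubernetes.io/arch":"amd64"}}}}}'
```

A second DaemonSet, identical except for its name, image, and an arm64 nodeSelector, then covers the remaining nodes.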
kubeadm init first runs a series of pre-flight checks to make sure the machine is ready to run Kubernetes. These checks emit warnings and exit on errors. kubeadm init then downloads and installs the cluster's control plane components. This can take several minutes. The output should look something like this:
```
[init] Using Kubernetes version: vX.Y.Z
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: <token>
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
```
To make kubectl work for a non-root user, run these commands:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
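Either way, a quick check confirms that kubectl can talk to the new control plane (the master reports NotReady until a pod network add-on is installed):

```
# The master shows up but stays NotReady until a pod network is deployed.
kubectl get nodes

# CoreDNS pods likewise stay Pending until the pod network is in place.
kubectl get pods -n kube-system
```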
The token is used for mutual authentication between the master and joining nodes. The token included here is secret; keep it safe, because anyone with this token can add authenticated nodes to your cluster.
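Tokens can be listed, revoked, and recreated later with the kubeadm token subcommands, so there is no need to keep the original init output around; for example:

```
# Inspect existing bootstrap tokens and their expiry.
kubeadm token list

# Create a fresh token and print the complete kubeadm join command for it.
kubeadm token create --print-join-command
```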
That covers creating a single master cluster with kubeadm. Hopefully the material above has been useful and taught you something new. If you found the article worthwhile, feel free to share it so more people can see it.