Integrating containerd with Kubernetes

Published: 2020-08-22 09:50:59  Author: juestnow

Deployment environment

# OS: CentOS Linux release 7.6.1810 (Core)
# kubelet version: v1.14.6
# containerd version: 1.3.0
# crictl version: v1.16.1
# CNI plugins version: v0.8.2
# working directory: /apps/k8s
# binary directory: /usr/local/bin/
# CNI directory: /apps/cni
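containerd and kubelet rely on the overlay and br_netfilter kernel modules; the service unit below loads them at start, but they and the usual bridge/forwarding sysctls can also be prepared up front. A minimal sketch; the sysctl values are standard Kubernetes prerequisites rather than something taken from this article:

modprobe overlay
modprobe br_netfilter
# typical sysctls for Kubernetes networking (assumed values, adjust as needed)
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system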

Download the required binaries

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.16.1/crictl-v1.16.1-linux-amd64.tar.gz
wget https://github.com/containerd/containerd/releases/download/v1.3.0/containerd-1.3.0.linux-amd64.tar.gz
wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz

Extract the downloaded files to their target directories

tar -xvf containerd-1.3.0.linux-amd64.tar.gz
mv bin/* /usr/local/bin/
tar -xvf crictl-v1.16.1-linux-amd64.tar.gz
mv crictl /usr/local/bin/
# extract the CNI plugins
mkdir -p /apps/cni/bin/
tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /apps/cni/bin/
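A quick sanity check that the binaries landed on PATH and report the expected versions (not in the original article; runc is not part of the containerd tarball and is assumed to be present already from the existing Docker installation):

containerd --version   # expect containerd 1.3.0
crictl --version       # expect crictl version v1.16.1
runc --version         # assumed to already exist from the Docker install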

Prepare the configuration files

# prepare the containerd configuration file
mkdir -p /apps/k8s/etc/containerd
vi /apps/k8s/etc/containerd/config.toml
----------------------------------------------------------------------
[plugins.opt]
path = "/apps/k8s/containerd"
[plugins.cri]
stream_server_address = "127.0.0.1"
stream_server_port = "10010"
sandbox_image = "docker.io/juestnow/pause-amd64:3.1"
max_concurrent_downloads = 20
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = ""
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = ""
      runtime_engine = ""
      runtime_root = ""
  [plugins.cri.cni]
    bin_dir = "/apps/cni/bin"
    conf_dir = "/etc/cni/net.d"
[plugins."io.containerd.runtime.v1.linux"]
  shim = "containerd-shim"
  runtime = "runc"
  runtime_root = ""
  no_shim = false
  shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
  platforms = ["linux/amd64"]
-------------------------------------------------------------------
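Before writing the systemd unit, the config can be smoke-tested by running containerd once in the foreground with the same flags the unit below will use (stop it with Ctrl-C when done); a sketch:

mkdir -p /run/k8s/containerd
containerd -c /apps/k8s/etc/containerd/config.toml \
           -a /run/k8s/containerd/containerd.sock \
           --state /apps/k8s/run/containerd \
           --root /apps/k8s/containerd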
# prepare the crictl configuration file
vim /etc/crictl.yaml
------------------------------------------------------------------
runtime-endpoint: unix:///run/k8s/containerd/containerd.sock
image-endpoint: unix:///run/k8s/containerd/containerd.sock
timeout: 10
debug: false
------------------------------------------------------------------

Prepare the containerd systemd unit

Since Docker was installed previously, a containerd.service unit already exists. To keep Docker running normally, the newly installed unit is named containerdk8s instead.
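To see which runtime units already exist before adding the new one, a quick check (not from the original article):

systemctl list-unit-files | grep -E 'containerd|docker'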
vim /usr/lib/systemd/system/containerdk8s.service
-----------------------------------------------------------------------------
[Unit]
Description=containerd container runtime (Kubernetes)
Documentation=https://containerd.io
After=network-online.target

[Service]
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStartPre=-/bin/mkdir -p /run/k8s/containerd
ExecStart=/usr/local/bin/containerd \
         -c /apps/k8s/etc/containerd/config.toml \
         -a /run/k8s/containerd/containerd.sock \
         --state /apps/k8s/run/containerd \
         --root /apps/k8s/containerd 

KillMode=process
Delegate=yes
OOMScoreAdjust=-999
# LimitNOFILE also determines how many files can be opened inside containers
LimitNOFILE=1024000
LimitNPROC=1024000
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

Start containerd

# reload systemd so it picks up the new unit
systemctl daemon-reload
systemctl start containerdk8s.service
# enable start on boot
systemctl enable containerdk8s.service

Verify that the containerd deployment works

crictl ps -a
crictl images
crictl pull busybox:1.25.0
[root@ingress-01 tmp]# crictl pull busybox:1.25.0
Image is up to date for busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
# the image was pulled successfully
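crictl can also report the overall runtime status, including whether the CNI configuration was loaded; a quick check not in the original article:

crictl info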

kubelet configuration for containerd

vim /apps/kubernetes/conf/kubelet
----------------------------------------------------------------------------------------------------------------------------
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/kubernetes/conf/bootstrap.kubeconfig \
              --fail-swap-on=false \
              --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/apps/cni/bin \
              --kubeconfig=/apps/kubernetes/conf/kubelet.kubeconfig \
              --address=192.168.30.36 \
              --node-ip=192.168.30.36 \
              --hostname-override=ingress-01 \
              --cluster-dns=10.64.0.2 \
              --cluster-domain=cluster.local \
              --authorization-mode=Webhook \
              --authentication-token-webhook=true \
              --client-ca-file=/apps/kubernetes/ssl/k8s/k8s-ca.pem \
              --rotate-certificates=true \
              --cgroup-driver=cgroupfs \
              --allow-privileged=true \
              --healthz-port=10248 \
              --healthz-bind-address=192.168.30.36 \
              --cert-dir=/apps/kubernetes/ssl \
              --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
              --node-labels=node-role.kubernetes.io/k8s-ingress=true \
              --serialize-image-pulls=false \
              --enforce-node-allocatable=pods,kube-reserved,system-reserved \
              --pod-manifest-path=/apps/work/kubernetes/manifests \
              --runtime-cgroups=/systemd/system.slice/kubelet.service \
              --kube-reserved-cgroup=/systemd/system.slice/kubelet.service \
              --system-reserved-cgroup=/systemd/system.slice \
              --root-dir=/apps/work/kubernetes/kubelet \
              --log-dir=/apps/kubernetes/log \
              --alsologtostderr=true \
              --logtostderr=false \
              --anonymous-auth=true \
              --container-log-max-files=10 \
              --container-log-max-size=100Mi \
              --container-runtime=remote \
              --container-runtime-endpoint=unix:///run/k8s/containerd/containerd.sock \
              --containerd=unix:///run/k8s/containerd/containerd.sock \
              --runtime-request-timeout=15m \
              --image-gc-high-threshold=70 \
              --image-gc-low-threshold=50 \
              --kube-reserved=cpu=500m,memory=512Mi,ephemeral-storage=1Gi \
              --system-reserved=cpu=1000m,memory=1024Mi,ephemeral-storage=1Gi \
              --eviction-hard=memory.available<500Mi,nodefs.available<10% \
              --sync-frequency=30s \
              --resolv-conf=/etc/resolv.conf \
              --pod-infra-container-image=docker.io/juestnow/pause-amd64:3.1 \
              --image-pull-progress-deadline=30s \
              --v=2 \
              --event-burst=30 \
              --event-qps=15 \
              --kube-api-burst=30 \
              --kube-api-qps=15 \
              --max-pods=100 \
              --pods-per-core=10 \
              --read-only-port=0 \
              --allowed-unsafe-sysctls 'kernel.msg*,kernel.shm*,kernel.sem,fs.mqueue.*,net.*' \
              --volume-plugin-dir=/apps/kubernetes/kubelet-plugins/volume"
---------------------------------------------------------------------------------------------------------------------------------------------
# modify the kubelet.service unit file
vim /usr/lib/systemd/system/kubelet.service
--------------------------------------------------------------------------------------------------------------------------------------------
[Unit]
Description=Kubernetes Kubelet
After=containerdk8s.service
Requires=containerdk8s.service

[Service]
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/hugetlb/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/blkio/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpuset/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/devices/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/net_cls,net_prio/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/perf_event/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpu,cpuacct/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/freezer/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/memory/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/pids/systemd/system.slice/kubelet.service
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/systemd/systemd/system.slice/kubelet.service
EnvironmentFile=-/apps/kubernetes/conf/kubelet
ExecStart=/apps/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
LimitNOFILE=1024000
LimitNPROC=1024000
LimitCORE=infinity
LimitMEMLOCK=infinity
[Install]
WantedBy=multi-user.target
# Note: when kubelet runs on Docker these cgroup directories do not need to be created manually,
# but when running on containerd they must be created by hand (hence the ExecStartPre lines above)
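After kubelet starts, you can confirm the per-controller cgroup directories were created; a sanity check, not in the original article:

ls -d /sys/fs/cgroup/*/systemd/system.slice/kubelet.service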

Restart kubelet

# reload systemd so the configuration changes take effect
systemctl daemon-reload
# restart kubelet
systemctl restart kubelet
# check whether kubelet started successfully
systemctl status kubelet
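If kubelet fails to start, its journal usually points to the cause; a sketch for following the logs:

journalctl -u kubelet -f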

Verify that kubelet is using containerd

[root@ingress-01 ~]# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
35df1da048da6       8f04a7056ad34       9 days ago          Running             kube-router         0                   85c23c6b85ebc
48f0dc7df9639       cda2583339c95       9 days ago          Running             consul              4                   9cebd1643a3df
76e5edca510c1       70a40025bbab5       9 days ago          Running             traefik             3                   3f1f2a000a8fa
12f2ccf4702ce       e5a616e4b9cf6       9 days ago          Running             node-exporter       2                   13f2894af33a5
3b8881a826bed       8f81e24b54353       9 days ago          Running             process-exporter    5                   935bfe1a9b028
[root@ingress-01 ~]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/cloudnativelabs/kube-router                             latest              8f04a7056ad34       31.6MB
docker.io/istio/install-cni                                       1.3.0               0f31f2c08c2f3       58.4MB
docker.io/juestnow/pause-amd64                                    3.1                 da86e6ba6ca19       326kB
docker.io/juestnow/process-exporter                               v0.5.0              8f81e24b54353       5.86MB
docker.io/library/alpine                                          latest              961769676411f       2.79MB
docker.io/library/busybox                                         latest              19485c79a9bbd       765kB
docker.io/library/consul                                          1.5.0               cda2583339c95       43.1MB
docker.io/library/nginx                                           latest              f949e7d76d63b       50.7MB
docker.io/library/traefik                                         v1.7.17             70a40025bbab5       24MB
docker.io/prom/node-exporter                                      v0.18.1             e5a616e4b9cf6       11.1MB
# everything is working
# one drawback of kubelet on containerd is that per-container network traffic cannot be monitored
# stop docker
service docker stop
# disable docker start on boot
chkconfig docker off
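From a machine with kubectl access you can also confirm the node now reports containerd as its runtime; a sketch, using the node name from the examples above:

kubectl get node ingress-01 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# expected output: containerd://1.3.0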

Running a standalone container with containerd

# create a CNI network config
vi /etc/cni/net.d/10-mynet.conf
------------------------------------------------------------------------
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
-----------------------------------------------------------------------------
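The CNI config is plain JSON, so it can be sanity-checked before containerd reads it; a sketch using the system Python that ships with CentOS 7:

python -m json.tool /etc/cni/net.d/10-mynet.conf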

Create the pod and container configurations

vi pod-config.json
--------------------------------
  {
      "metadata": {
          "name": "sandbox",
          "namespace": "default",
          "attempt": 1,
          "uid": "hdishd83djaidwnduwk28bcsb"
      },
      "log_directory": "/tmp",
      "linux": {
      }
  }
-------------------------------------
vi container-config.json
-------------------------------------
  {
    "metadata": {
        "name": "busybox"
    },
    "image":{
        "image": "busybox"
    },
    "command": [
        "top"
    ],
    "log_path":"busybox/0.log",
    "linux": {
    }
  }
------------------------
# create the pod sandbox
crictl runp pod-config.json
# this prints the sandbox ID; pass it to crictl create
crictl create b89dcd8cefcad50d8ae7153e01b7205a1f8497e8de40aa3337e52c116a626c1e container-config.json pod-config.json
# list the created container
crictl ps -a
# start the container
crictl start 768ffe572c595
# exec into the container
crictl exec -ti 768ffe572c595 /bin/sh
# if you can exec into the container, everything is working
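When the test is finished, the container and its sandbox can be cleaned up; the IDs below are the ones printed in the steps above:

crictl stop 768ffe572c595
crictl rm 768ffe572c595
crictl stopp b89dcd8cefcad50d8ae7153e01b7205a1f8497e8de40aa3337e52c116a626c1e
crictl rmp b89dcd8cefcad50d8ae7153e01b7205a1f8497e8de40aa3337e52c116a626c1e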
