Resource Isolation and Management for Kubernetes Deployments on Debian
1 Architecture and Prerequisites
2 Layered Isolation and Management in Practice
3 Key Configuration Examples
# dev-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
# dev-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
Run:
kubectl apply -f dev-namespace.yaml
kubectl apply -f dev-quota.yaml
# dev-limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limitrange
  namespace: dev
spec:
  limits:
  - default:
      cpu: "200m"
      memory: "256Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    type: Container
Run: kubectl apply -f dev-limitrange.yaml
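A quick way to see the LimitRange at work: a container that declares no resources at all should come back from the API server with the defaults above injected at admission time. A minimal sketch (the pod name defaults-demo is an assumption for illustration):

# defaults-demo.yaml (illustrative): no resources block on purpose
apiVersion: v1
kind: Pod
metadata:
  name: defaults-demo
  namespace: dev
spec:
  containers:
  - name: demo
    image: nginx:1.25
    # no resources: the dev-limitrange defaults should be injected,
    # i.e. requests cpu=100m/memory=128Mi, limits cpu=200m/memory=256Mi

After kubectl apply, kubectl get pod defaults-demo -n dev -o yaml should show the injected resources stanza.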
# app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "200m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
Note: when both requests and limits are set, the scheduler picks a node based on requests; at runtime, CPU usage can burst anywhere between requests and limits, while memory is hard-capped at limits. When only limits are set, requests default to the same values as limits. A Pod without requests may land on an already-loaded node, and a container without a memory limit risks being killed by the OOM killer under memory pressure.
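The "only limits" case from the note above can be sketched as follows (the pod name limits-only is an assumption for illustration):

# limits-only pod (illustrative): requests are omitted, so Kubernetes
# copies the limits values into requests, giving the pod equal
# requests and limits (the Guaranteed QoS class)
apiVersion: v1
kind: Pod
metadata:
  name: limits-only
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      limits:
        cpu: "500m"
        memory: "512Mi"
      # requests is defaulted to cpu: "500m", memory: "512Mi"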
# Taint the dedicated node
kubectl taint nodes node1 dedicated=monitoring:NoSchedule
# monitoring.yaml: the monitoring component tolerates the taint
apiVersion: v1
kind: Pod
metadata:
  name: monitoring
  namespace: dev
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "monitoring"
    effect: "NoSchedule"
  containers:
  - name: exporter
    image: prom/node-exporter
Run: kubectl apply -f monitoring.yaml
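Note that a toleration only permits the pod onto the tainted node; it does not steer it there. To actually pin monitoring pods to the dedicated node, the usual companion is a node label plus a nodeSelector (or node affinity). A sketch, assuming the label is applied with kubectl label nodes node1 dedicated=monitoring:

# excerpt (illustrative): add alongside the toleration in the pod spec
spec:
  nodeSelector:
    dedicated: monitoring   # schedules only onto nodes carrying this label
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "monitoring"
    effect: "NoSchedule"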
# deny-all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Note: start from a default-deny posture, then explicitly create Ingress/Egress rules for the services that need to communicate, giving fine-grained network isolation at the namespace or service level. Keep in mind that NetworkPolicy is only enforced when the cluster's CNI plugin supports it (e.g. Calico or Cilium).
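As an example of the explicit allow rules mentioned above, a policy opening web traffic within the namespace might look like this (the label app: web and port 80 are assumptions for illustration):

# allow-web-ingress.yaml (illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-ingress
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # any pod in the dev namespace
    ports:
    - protocol: TCP
      port: 80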
4 Monitoring and Day-to-Day Operations
5 Common Pitfalls and Recommendations