Kubernetes: Common YAML Templates and Commands

Access control: RBAC

Create a Role/ClusterRole
kubectl create clusterrole|role NAME --verb=get,list,watch,create,update,patch,delete --resource=deployments,statefulsets,daemonsets,...
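
For example, to create a ClusterRole that may only create Deployments, StatefulSets, and DaemonSets (the name deployment-clusterrole is an assumption for illustration):
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets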

Create a ServiceAccount
kubectl -n app-team1 create serviceaccount cicd-token

Create a RoleBinding
kubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname]
[--serviceaccount=namespace:serviceaccountname]
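
A concrete example that binds such a ClusterRole to the cicd-token ServiceAccount above, limited to the app-team1 namespace (the binding and role names are assumptions):
kubectl -n app-team1 create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token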

Use a ServiceAccount in a Deployment

kubectl -n frontend set serviceaccount deployments frontend-deployment app

View Pod CPU usage

kubectl top pod -l key=value --sort-by=cpu|memory -A

NetworkPolicy

NetworkPolicy example

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978

Allow all ingress traffic to all Pods in a namespace

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}

Default-deny all egress traffic for all Pods in a namespace

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress

Deployment

Sample template

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - name: http
          containerPort: 80
          protocol: TCP

Scale a Deployment's replica count

kubectl scale deployment NAME --replicas=4

Environment variables

apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"

Canary deployment

Approach: distinguish the two Deployments by name and by label (current-chipmunk-deployment vs. canary-chipmunk-deployment), and give both the shared label run: dep-svc so that the Service selects them together and load-balances across both backends (see the Service sketch after the templates below).

……
metadata:
  name: current-chipmunk-deployment # change this name per the task requirements
  namespace: goshawk
spec:
  replicas: 1 # set to 1 initially
  selector:
    matchLabels:
      app: current-chipmunk-deployment # label that distinguishes the two Deployments
      run: dep-svc # make sure both current-chipmunk-deployment and canary-chipmunk-deployment carry this shared label
  template:
    metadata:
      labels:
        app: current-chipmunk-deployment
        run: dep-svc # make sure both current-chipmunk-deployment and canary-chipmunk-deployment carry this shared label
……
……
metadata:
  name: canary-chipmunk-deployment # change this name per the task requirements
  namespace: goshawk
spec:
  replicas: 1 # set to 1 initially
  selector:
    matchLabels:
      app: canary-chipmunk-deployment # label that distinguishes the two Deployments
      run: dep-svc # make sure both current-chipmunk-deployment and canary-chipmunk-deployment carry this shared label
  template:
    metadata:
      labels:
        app: canary-chipmunk-deployment
        run: dep-svc # make sure both current-chipmunk-deployment and canary-chipmunk-deployment carry this shared label
……
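
The Service in front of the two Deployments then only needs to select the shared run: dep-svc label; a minimal sketch, where the Service name and port are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: chipmunk-service
  namespace: goshawk
spec:
  selector:
    run: dep-svc
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP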

Change the image: upgrade and roll back

Upgrade:
kubectl set image deployment NAME nginx=nginx:version
Roll back:
kubectl rollout undo deployment NAME [--to-revision=number]
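To list the available revisions before rolling back:
kubectl rollout history deployment NAME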

Exposing a Service

Service YAML template

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-deployment
  name: nginx-deployment
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment
  type: NodePort # or ClusterIP

Expose a Deployment
kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name]
[--name=name] [--external-ip=external-ip-of-service] [--type=type] [options]
Example: kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort --dry-run=client -o yaml

Ingress

Sample template

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx" # deprecated annotation, but still worth adding for compatibility
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Configure HTTPS access for ingress-nginx

Create the TLS Secret:
kubectl create secret tls tls-secret --key tls.key --cert tls.crt

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - example.ingredemo.com
    secretName: tls-secret
  rules:
  - host: example.ingredemo.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
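
Once example.ingredemo.com resolves to the ingress controller address, the setup can be checked with curl (-k skips verification of the self-signed certificate):
curl -k https://example.ingredemo.com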

Pod

Node selector scheduling

The Pod will be scheduled onto a node that carries the label disktype=ssd
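
If the node does not yet carry the label, add it first (the node name node01 is a placeholder):
kubectl label nodes node01 disktype=ssd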

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd

NodeName

The Pod can only run on the node kube-01

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01

Create a multi-container Pod

apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  - name: consul
    image: consul
    imagePullPolicy: IfNotPresent

PV

Create a PersistentVolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce # access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
  hostPath:
    path: "/mnt/data"

PVC

Create a PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Create a Pod that uses the PVC

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim

Sidecar containers for streaming container logs

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done      
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}

Upgrade the cluster

1. The existing Kubernetes cluster is running version 1.24.2. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.24.3. Make sure to drain the master node before the upgrade and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugins, the DNS service, or any other add-ons.

kubectl cordon master01

kubectl drain master01 --ignore-daemonsets

apt-get update

apt-cache show kubeadm | grep 1.24.3

apt-get install kubeadm=1.24.3-00

Check the kubeadm version after the upgrade:
kubeadm version

Verify the upgrade plan, then apply it while excluding etcd:
kubeadm upgrade plan
kubeadm upgrade apply v1.24.3 --etcd-upgrade=false

2. Upgrade kubelet and kubectl on the master node

Upgrade kubelet:
apt-get install kubelet=1.24.3-00
kubelet --version
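
After installing the new kubelet package, reload and restart the service so the new version takes effect:
sudo systemctl daemon-reload
sudo systemctl restart kubelet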

Upgrade kubectl:
apt-get install kubectl=1.24.3-00
kubectl version
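
Finally, bring the master node back into scheduling, as the task requires:
kubectl uncordon master01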

Back up and restore etcd

Create a snapshot of the existing etcd instance running at https://11.0.1.111:2379 and save it to /var/lib/backup/etcd-snapshot.db, then restore the existing previous snapshot located at /data/backup/etcd-snapshot-previous.db. The following TLS certificates and key are provided for connecting to the server with etcdctl.

CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key

Back up:

export ETCDCTL_API=3

etcdctl --endpoints=https://11.0.1.111:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" snapshot save /var/lib/backup/etcd-snapshot.db


Check:
etcdctl snapshot status /var/lib/backup/etcd-snapshot.db -w table

Restore:

sudo ETCDCTL_API=3 etcdctl --endpoints=https://11.0.1.111:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" snapshot restore /data/backup/etcd-snapshot-previous.db
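
In practice the restore is often written to a fresh data directory and the etcd static Pod manifest is then pointed at that directory; a sketch, where the data directory path is an assumption:
sudo ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore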

Node maintenance

Mark the node named node02 as unschedulable and reschedule all Pods running on it.

kubectl cordon node02

kubectl drain node02 --ignore-daemonsets [--delete-emptydir-data --force]

Job

backoffLimit: by default a Job keeps running unless a Pod fails (restartPolicy=Never) or a container exits in error (restartPolicy=OnFailure). In those cases the Job uses spec.backoffLimit to decide whether and how to retry. Once the number of retries reaches the .spec.backoffLimit limit, the Job is marked as failed and any Pods it is running are terminated.

activeDeadlineSeconds: another way to terminate a Job is to set an active deadline. Set .spec.activeDeadlineSeconds to a number of seconds; it applies to the entire lifetime of the Job, no matter how many Pods the Job creates. Once the Job has been running for activeDeadlineSeconds seconds, all of its running Pods are terminated and the Job status becomes type: Failed with reason: DeadlineExceeded.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout
spec:
  backoffLimit: 5 # if a container fails, retry at most 5 times
  activeDeadlineSeconds: 100 # Kubernetes terminates the Job automatically once it has run for more than 100 seconds
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

CronJob 

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *" # 每隔 5 分钟执行一次
  successfulJobsHistoryLimit: 2 # 保留 2 个已完成的 Job
  failedJobsHistoryLimit: 4 # 保留 4 个失败的 Job
  jobTemplate:
    spec:
      activeDeadlineSeconds: 8 # terminate the Pod after 8 seconds; the run must finish within 8 seconds
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure # restart the container when it fails

Manually trigger a CronJob

kubectl create job test-job --from=cronjob/hello

Limit CPU and memory

---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

First check the LimitRange details in the haddock namespace:

kubectl -n haddock get limitrange
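
To see the actual limit values, describe it:
kubectl -n haddock describe limitrange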

Security context

  • allowPrivilegeEscalation: controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the no_new_privs flag is set on the container process. allowPrivilegeEscalation is always true when the container meets either of the following conditions:
    • it runs in privileged mode, or
    • it has the CAP_SYS_ADMIN capability
  • readOnlyRootFilesystem: mounts the container's root filesystem as read-only.

Set the security context for a Pod

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true

Set the security context for a Container

Security settings specified for a Container apply only to that Container, and where they overlap with settings made at the Pod level, the Container-level settings override the Pod-level ones.

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 2000 # the process runs as user 2000
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true

Add the CAP_NET_ADMIN and CAP_SYS_TIME capabilities to a container

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]

ConfigMap

Below is an example of a Pod that mounts a ConfigMap as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    configMap:
      name: myconfigmap
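
The referenced ConfigMap can be created imperatively beforehand, for example (the key/value pair is a placeholder):
kubectl create configmap myconfigmap --from-literal=key1=value1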

Secret

Create a Secret
kubectl create secret generic another-secret --from-literal=key1=value1

Encode Secret data as base64
echo -n 'my-app' | base64

Decode Secret data

echo -n 'xxxxx' | base64 -d

Define environment variables using data from multiple Secrets

apiVersion: v1
kind: Pod
metadata:
  name: envvars-multiple-secrets
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: BACKEND_USERNAME
      valueFrom:
        secretKeyRef:
          name: backend-user
          key: backend-username
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-user
          key: db-username

Create a Pod that accesses a Secret containing SSH keys and consumes it through a volume

Create a Secret containing some SSH keys:

kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub

apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-key-secret
  containers:
  - name: ssh-test-container
    image: mySshImage
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"

Pod health checks: livenessProbe and readinessProbe

Define an HTTP liveness probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz # probe path
        port: 8080 # probe port
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

定义 TCP 的存活探测 

apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: registry.k8s.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15 # wait 15 seconds before the first probe
      periodSeconds: 20 # probe every 20 seconds

Resource quotas (Quota)

In the qutt namespace, create a Quota named myquota with hard limits of 1 CPU, 1G of memory, and 2 Pods.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: myquota
  namespace: qutt
spec:
  hard:
    cpu: "1"
    memory: 1G
    pods: "2"

