Definitions
A cheat sheet of Kubernetes commands.
Master: The machine that controls Kubernetes nodes; all task assignments originate here.
Node: The machines that perform the requested, assigned tasks. The Kubernetes master controls them.
Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage away from the underlying containers, making it easier to move containers around the cluster.
Replication controller: Controls how many identical copies of a pod should be running somewhere on the cluster.
Service: Decouples work definitions from the pods. The Kubernetes service proxy automatically routes service requests to the right pod, no matter where it moves in the cluster or even if it has been replaced.
Kubelet: This service runs on nodes, reads the container manifests, and ensures the defined containers are started and running.
kubectl: The command-line configuration tool for Kubernetes.
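The Service definition above can be illustrated with a minimal manifest. This is a hedged sketch, not from the original article; the `kuard` name, label, and port are chosen to match the commands used later in this sheet:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  selector:
    app: kuard          # routes traffic to any pod carrying this label
  ports:
    - port: 8080        # port exposed by the service
      targetPort: 8080  # port the container listens on
```

Because the service matches pods by label rather than by name, requests keep flowing to whichever pods currently carry `app: kuard`, even as individual pods are replaced.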
Kubectl Alias
```shell
$ alias k=kubectl
```
Cluster Info
```shell
$ kubectl config get-clusters
$ kubectl cluster-info
```
Context

```shell
# Get a list of contexts.
$ kubectl config get-contexts
# Get the current context.
$ kubectl config current-context
# Switch the current context.
$ kubectl config use-context docker-desktop
# Set the default namespace.
$ kubectl config set-context $(kubectl config current-context) --namespace=my-namespace
```
Get Commands
```shell
$ kubectl get all
$ kubectl get namespaces
$ kubectl get services
$ kubectl get replicationcontroller
$ kubectl get deployments
$ kubectl get ingress
$ kubectl get configmaps
$ kubectl get nodes
$ kubectl get pods
$ kubectl get rs
$ kubectl get svc kuard
$ kubectl get endpoints kuard
```
Additional switches that can be added to the above commands:
-o wide: show more information.
--watch or -w: watch for changes.
Namespaces
--namespace: get resources for a specific namespace.
You can set the default namespace for the current context as follows:
```shell
$ kubectl config set-context $(kubectl config current-context) --namespace=my-namespace
```
Labels
```shell
$ kubectl get pods --show-labels
# Get pods by label.
$ kubectl get pods -l environment=production,tier!=frontend
$ kubectl get pods -l 'environment in (production,test),tier notin (frontend,backend)'
```
Describe Command
```shell
$ kubectl describe nodes [id]
$ kubectl describe pods [id]
$ kubectl describe rs [id]
$ kubectl describe svc kuard [id]
$ kubectl describe endpoints kuard [id]
```
Delete Command
```shell
$ kubectl delete nodes [id]
$ kubectl delete pods [id]
$ kubectl delete rs [id]
$ kubectl delete svc kuard [id]
$ kubectl delete endpoints kuard [id]
```
Force delete a pod without waiting for it to shut down gracefully:

```shell
$ kubectl delete pods POD_NAME --grace-period=0 --force
```
Create vs Apply
kubectl create can be used to create new resources, while kubectl apply inserts or updates resources while preserving any manual changes that have been made (such as scaling pods).
--record: add the current command as an annotation to the resource.
--recursive: recursively look for YAML in the specified directory.
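A short illustrative session of the difference (assuming the kuard-deployment.yml manifest generated in the Create Deployment section below; this needs a running cluster, so output is elided):

```shell
# apply creates the deployment if it does not exist yet
$ kubectl apply -f kuard-deployment.yml

# create now fails, because the resource already exists
$ kubectl create -f kuard-deployment.yml

# apply patches the existing resource instead, preserving
# manual changes such as a scaled replica count
$ kubectl apply -f kuard-deployment.yml --record
```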
Create Pod
```shell
$ kubectl run kuard --generator=run-pod/v1 --image=gcr.io/kuar-demo/kuard-amd64:1 --output yaml --export --dry-run > kuard-pod.yml
$ kubectl apply -f kuard-pod.yml
```
Create Deployment
```shell
$ kubectl run kuard --image=gcr.io/kuar-demo/kuard-amd64:1 --output yaml --export --dry-run > kuard-deployment.yml
$ kubectl apply -f kuard-deployment.yml
```
Create Service
```shell
$ kubectl expose deployment kuard --port 8080 --target-port=8080 --output yaml --export --dry-run > kuard-service.yml
$ kubectl apply -f kuard-service.yml
```
Export YAML for New Pod
```shell
$ kubectl run my-cool-app --image=me/my-cool-app:v1 --output yaml --export --dry-run > my-cool-app.yaml
```
Export YAML for Existing Object
```shell
$ kubectl get deployment my-cool-app --output yaml --export > my-cool-app.yaml
```
Logs
```shell
# Get logs.
$ kubectl logs -l app=kuard
# Get logs for a previously terminated container.
$ kubectl logs POD_NAME --previous
# Watch logs in real time.
$ kubectl attach POD_NAME
# Copy files out of a pod (requires the tar binary in the container).
$ kubectl cp POD_NAME:/var/log .
```
Port Forward
```shell
$ kubectl port-forward deployment/kuard 8080:8080
```
Scaling
```shell
# Update replicas.
$ kubectl scale deployment nginx-deployment --replicas=10
```
Autoscaling
```shell
$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
```
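The autoscale command above creates a HorizontalPodAutoscaler object. A declarative equivalent looks roughly like this (a sketch using the autoscaling/v1 API, reusing the deployment name from the command above):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:          # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  targetCPUUtilizationPercentage: 80  # scale up above 80% average CPU
```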
Rollout
```shell
# Get rollout status.
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
# Get rollout history.
$ kubectl rollout history deployment/nginx-deployment
$ kubectl rollout history deployment/nginx-deployment --revision=
# Undo a rollout.
$ kubectl rollout undo deployment/nginx-deployment
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
# Pause/resume a rollout.
$ kubectl rollout pause deployment/nginx-deployment
$ kubectl rollout resume deploy/nginx-deployment
```
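How a rollout proceeds is controlled by the deployment's update strategy. A hedged fragment (field names from the apps/v1 Deployment spec; the values here are illustrative) that could be merged into a Deployment spec:

```yaml
# Fragment of a Deployment spec controlling rollout behavior.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra pod above the desired count
      maxUnavailable: 0      # never drop below the desired count
  revisionHistoryLimit: 10   # revisions kept for `kubectl rollout undo`
```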
Pod Example
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    accelerator: nvidia-tesla-p100
```
Deployment Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace
  labels:
    environment: production
    tier: frontend
  annotations:
    key1: value1
    key2: value2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
```
Dashboard
```shell
# Enable proxy.
$ kubectl proxy
```
Source: https://architecturecoding.com/series/kubernetes-series.html