Command line: the kubectl command-line tool
Pros: covers more than 90% of scenarios
Convenient for creating, deleting, and querying resources, but not very friendly for modifying them
Cons: commands are long, complex, and hard to remember
Declarative:
Resource management through YAML files in k8s (a minimal sketch follows the list below)
GUI: management through graphical tools
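As a minimal sketch of the declarative style (the file name nginx-demo.yaml and the labels are illustrative assumptions, not taken from these notes): write the desired state into a YAML file, then hand it to the cluster with kubectl apply.

cat > nginx-demo.yaml <<'EOF'
# Hypothetical example: a Deployment declared as desired state
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl apply -f nginx-demo.yaml    # create the resource, or update it to match the declared state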
1. kubectl commands in detail: viewing resources, deploying, and checking on pods (details, logs, release and rollback)
kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:27:39Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:23:01Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v1 true HorizontalPodAutoscaler
cronjobs cj batch/v1beta1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
endpointslices discovery.k8s.io/v1beta1 true EndpointSlice
events ev events.k8s.io/v1 true Event
ingresses ing extensions/v1beta1 true Ingress
flowschemas flowcontrol.apiserver.k8s.io/v1beta1 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta1 false PriorityLevelConfiguration
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1beta1 true PodDisruptionBudget
podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
kubectl cluster-info
Kubernetes control plane is running at https://20.0.0.70:6443
KubeDNS is running at https://20.0.0.70:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-test-5d94dbb4f-2gh4f 1/1 Running 0 55m
myapp-test-5d94dbb4f-t2294 1/1 Running 0 19h
nginx-dn-6d6cd9c7c5-6fnhr 1/1 Running 0 36m
kubectl get ns
NAME STATUS AGE
default Active 20h
kube-node-lease Active 20h
kube-public Active 20h
kube-system Active 20h
kubernetes-dashboard Active 20h
xiaobu Active 33m
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-test-5d94dbb4f-2gh4f 1/1 Running 0 56m 10.244.1.10 node01 <none> <none>
myapp-test-5d94dbb4f-t2294 1/1 Running 0 19h 10.244.2.9 node02 <none> <none>
nginx-dn-6d6cd9c7c5-6fnhr 1/1 Running 0 37m 10.244.2.10 node02 <none> <none>
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready <none> 20h v1.20.15
node01 Ready <none> 20h v1.20.15
node02 Ready <none> 20h v1.20.15
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready <none> 20h v1.20.15 20.0.0.70 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://24.0.7
node01 Ready <none> 20h v1.20.15 20.0.0.71 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://24.0.7
node02 Ready <none> 20h v1.20.15 20.0.0.72 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://24.0.7
kubectl describe pod nginx-dn-6d6cd9c7c5-6fnhr
Name:         nginx-dn-6d6cd9c7c5-6fnhr
Namespace:    default
Priority:     0
Node:         node02/20.0.0.72
Start Time:   Thu, 28 Dec 2023 20:57:12 -0500
Labels:       app=nginx-dn
              pod-template-hash=6d6cd9c7c5
Annotations:  <none>
Status:       Running
IP:           10.244.2.10
IPs:
  IP:  10.244.2.10
Controlled By:  ReplicaSet/nginx-dn-6d6cd9c7c5
Containers:
  nginx:
    Container ID:   docker://a51290063d3514774191132c4af4d5a74e2062904b054b0e2a8e3117b1a103d4
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 28 Dec 2023 20:57:29 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sgjrp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-sgjrp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sgjrp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  38m   default-scheduler  Successfully assigned default/nginx-dn-6d6cd9c7c5-6fnhr to node02
  Normal  Pulling    38m   kubelet            Pulling image "nginx"
  Normal  Pulled     38m   kubelet            Successfully pulled image "nginx" in 15.461380913s
  Normal  Created    38m   kubelet            Created container nginx
  Normal  Started    38m   kubelet            Started container nginx
kubectl logs nginx-dn-6d6cd9c7c5-6fnhr
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/12/29 01:57:29 [notice] 1#1: using the "epoll" event method
2023/12/29 01:57:29 [notice] 1#1: nginx/1.21.5
2023/12/29 01:57:29 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/12/29 01:57:29 [notice] 1#1: OS: Linux 3.10.0-693.el7.x86_64
2023/12/29 01:57:29 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:65536
2023/12/29 01:57:29 [notice] 1#1: start worker processes
2023/12/29 01:57:29 [notice] 1#1: start worker process 31
2023/12/29 01:57:29 [notice] 1#1: start worker process 32
2023/12/29 01:57:29 [notice] 1#1: start worker process 33
2023/12/29 01:57:29 [notice] 1#1: start worker process 34
10.244.0.0 - - [29/Dec/2023:02:15:32 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.29.0" "-"
State the action first (create, delete, get), then the object (ns, pod, service) and its name (e.g. xiaobu, nginx-test-5d94dbb4f-rq549, nginx1, nginx2), and use -n to specify the namespace, as in the examples below:
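A few concrete action + object combinations following this pattern (the names reuse examples from these notes):

kubectl create ns xiaobu       # action: create, object: the namespace xiaobu
kubectl get pod -n xiaobu      # action: get, object: pods, in the xiaobu namespace
kubectl delete pod nginx1      # action: delete, object: the pod nginx1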
Deploying pods with a Deployment
Two ways to deploy with a Deployment:
Imperative: via the command line
Declarative: via a YAML file
Rolling update: pods are not all replaced in one go, but one by one;
used when updating pods, new pods are introduced gradually while old pods are reduced gradually (the pace can be tuned in YAML; see the sketch after this list)
Self-healing: if a pod fails, the Deployment automatically starts a new pod to replace it
Rollback: if an update goes wrong, the Deployment provides restore points, and you can manually revert to the pre-update state
Scaling: a Deployment can adjust the number of pods at any time to adapt to changes in traffic
The features above require that the service was created through a Deployment; the vast majority of pods are created with Deployments
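How fast new pods replace old ones during a rolling update can be tuned in the Deployment's spec. A minimal sketch (the strategy values below are illustrative assumptions, not from these notes):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dn
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most 1 pod above the desired count during an update
      maxUnavailable: 1     # at most 1 pod may be unavailable during an update
  selector:
    matchLabels:
      app: nginx-dn
  template:
    metadata:
      labels:
        app: nginx-dn
    spec:
      containers:
      - name: nginx
        image: nginx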
kubectl get deployments.apps    view pods created via Deployments in the current namespace
kubectl get deployments.apps -n kube-system    view the Deployments created by default in the kube-system namespace
DaemonSet: cannot be created from the command line; this creation method can only be defined in a YAML file
It runs in the background and creates one pod of the same kind and same version on every node
These are generally dependency environments and critical components, and you normally do not operate on these resources
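Since a DaemonSet can only be defined declaratively, a minimal YAML sketch looks like this (the name nginx-ds and the image are illustrative assumptions); applying it with kubectl apply -f starts one identical pod on every schedulable node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: nginx
        image: nginx    # the same image and version runs on every node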
kubectl create deployment nginx-dn --image=nginx    create pods via a Deployment
kubectl create deployment nginx-dn --image=nginx --replicas=3 -n xiaobu    create pods in a specified namespace
Pods created via a Deployment or a DaemonSet are managed by a controller; deleting one with kubectl delete pod does not remove it for good, because the controller recreates it, so the effect is like restarting the pod
kubectl delete deployments.apps nginx-dn -n xiaobu    delete the pods by deleting their Deployment
Once a Deployment is deleted, every pod created from that Deployment is deleted with it
kubectl run nginx1 --image=nginx    create a pod with kubectl run
kubectl delete pod nginx1    delete a pod created with kubectl run
Not created by a controller, so it is deleted directly
kubectl exec -it nginx-dn-6d6cd9c7c5-6fnhr bash
docker exec can only be used on the local host and cannot cross hosts; kubectl exec can enter a container across hosts
kubectl exec -it -n xiaobu nginx-6799fc88d8-skvvq bash    enter a pod created in a specified namespace
kubectl delete pod nginx-dn-6d6cd9c7c5-6fnhr --force --grace-period=0    mainly used to finish off pods stuck in the Terminating state
--grace-period is the graceful-termination period, 30 seconds by default, which lets the pod gracefully end the processes inside its containers before exiting; --grace-period=0 stops the pod immediately and must be combined with --force
Scale up
kubectl scale deployment nginx-dn --replicas=3
[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-test-5d94dbb4f-2gh4f 1/1 Running 0 72m
myapp-test-5d94dbb4f-t2294 1/1 Running 0 20h
nginx-dn-6d6cd9c7c5-d94cp 1/1 Running 0 8s
nginx-dn-6d6cd9c7c5-vvtp9 1/1 Running 0 5m49s
nginx-dn-6d6cd9c7c5-wph8g 0/1 ContainerCreating 0 8s
Scale down
kubectl scale deployment nginx-dn --replicas=1
kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-test-5d94dbb4f-2gh4f 1/1 Running 0 73m
myapp-test-5d94dbb4f-t2294 1/1 Running 0 20h
nginx-dn-6d6cd9c7c5-d94cp 0/1 Terminating 0 61s
nginx-dn-6d6cd9c7c5-vvtp9 1/1 Running 0 6m42s
nginx-dn-6d6cd9c7c5-wph8g 0/1 Terminating 0 61s
Even if no replica count was specified when the pods were created, the replica count can still be modified afterwards
kubectl create deployment nginx --image=nginx:1.10 --replicas=3    create pods
Service types
kubectl get svc    view Services (the default namespace by default)
kubectl get svc -n kube-system    view Services in a specified namespace
kubectl delete svc <service-name>    delete a Service
ClusterIP: the default Service type. Provides a virtual IP address inside the cluster, through which pod resources can be accessed directly; it cannot provide access from outside the cluster
NodePort: opens the same port on every node, so external clients can access pod resources via nodeIP:nodePort. This is one way to access Service resources from outside the cluster, working as a layer-4 proxy
The node port is assigned randomly by default, or can be specified within 30000-32767 (pinning a specific port requires YAML; see the sketch after the example below)
For pods created via a Deployment, a Service can be exposed like this:
kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=NodePort
--port=80    the Service's port inside the cluster
--target-port=80    the container port inside the pod
[root@master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
myapp-test ClusterIP 10.96.245.162 <none> 30000/TCP 20h
nginx NodePort 10.96.169.187 <none> 80:30067/TCP 21h
nginx-service NodePort 10.96.20.230 <none> 80:30862/TCP 7s
10.96.20.230 is the cluster-internal IP address; it cannot be accessed from outside the cluster
80 is the internal port of the Service
30862 is the node port mapped to the Service's internal port 80
The container port inside the pod is fixed; --port, the port mapped between the Service and the container, can be any value.
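kubectl expose assigns the node port randomly; to pin it within 30000-32767, the Service can be declared in YAML instead. A minimal sketch (the nodePort value 30080 and the app label are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx            # assumes the Deployment's pods carry this label
  ports:
  - port: 80              # the Service's cluster port
    targetPort: 80        # the container port inside the pod
    nodePort: 30080       # pinned node port, must be within 30000-32767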
LoadBalancer: if the Service type is set to LoadBalancer, the mapped address is provided by the cloud platform. This is only used when a public-cloud provider sets up the Service on its platform: external clients access it through that address and get load balancing, and the LoadBalancer address is a paid resource
Once a Service of type LoadBalancer is created, you are given an address that proxies the internal IP addresses of the pods.
kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=LoadBalancer
ExternalName: DNS mapping. Assigns a domain name to the Service, and backend pod resources are accessed through that domain name.
An ExternalName Service cannot provide load balancing by itself; a LoadBalancer address must be set for that to work.
kubectl expose deployment nginx --port=80 --target-port=80 --name=nginx-service --type=ExternalName
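For reference, an ExternalName Service is normally declared in YAML as a pure DNS alias, needing no selector or ports. A minimal sketch (the name and domain are illustrative placeholders):

apiVersion: v1
kind: Service
metadata:
  name: nginx-external
spec:
  type: ExternalName
  externalName: www.example.com    # cluster DNS answers nginx-external with a CNAME to this domain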
Check the current nginx version
curl -I 20.0.0.70:30862
Project lifecycle
create ----- release ----- update ----- roll back ----- delete
Update
kubectl set image deployment nginx nginx=nginx:1.20    update the image, which triggers a rolling update
Rollback
kubectl rollout history deployment nginx    view the rollback points (revisions)
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
The size of the number tells you how recent an operation is: the larger the number, the more recent the operation
kubectl rollout undo deployment nginx --to-revision=1    roll back to revision 1
kubectl get pod -w    watch the rollback process in real time
Check the rollback status
kubectl rollout status deployment/nginx
kubectl set image deployment nginx nginx=nginx:1.15 --record    rolling update, with --record so the command is recorded in the revision history (CHANGE-CAUSE)
kubectl get all    view all resources in the cluster
kubectl get all -o wide    view all resources in the cluster with details
kubectl get all -o wide -n kube-system    view detailed information for all resources in a specified namespace