Horizontal Pod Autoscaling (HPA): horizontal automatic scaling of pods.
HPA is a built-in Kubernetes module.
When a pod's CPU utilization reaches a configured threshold, the scaling mechanism is triggered.
Scaling is driven by the CPU threshold.
replication controller: controls the number of pod replicas
deployment controller: manages Deployments, which in turn deploy pods
HPA controls the replica count of pods and how their deployment is adjusted.
1. HPA runs inside the kube-controller-manager service and periodically checks pod CPU utilization, every 30 seconds by default.
2. HPA, replication controller, and deployment controller are all Kubernetes resource objects. By tracking and analyzing the load changes of pods managed by a replication controller or Deployment, HPA adjusts the target pods' replica count accordingly.
Thresholds define the normal replica count and the maximum number of replicas pods may scale out to once the threshold is exceeded.
3. Component: metrics-server is deployed in the cluster and exposes metrics data.
Notes:
1. Pods must declare resource limits when defined; otherwise HPA has nothing to monitor.
2. Scale-out is immediate: as soon as the threshold is exceeded, replicas are added. It does not jump straight to the maximum; the count fluctuates between min and max, and once the added replicas satisfy demand, no further scale-out occurs.
3. Scale-in is slow: if traffic peaks are high and the reclaim policy is too aggressive, the service could crash, so replicas are reclaimed gradually, based on periodically collected metrics.
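The scale-out behavior described above follows the standard HPA formula, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A quick shell sketch with illustrative values:

```shell
#!/bin/sh
# HPA desired-replica calculation: ceil(current * util / target),
# done here with integer ceiling division.
current_replicas=3      # replicas currently running (illustrative)
current_util=80         # observed average CPU utilization, percent
target_util=50          # target utilization from the HPA spec
desired=$(( (current_replicas * current_util + target_util - 1) / target_util ))
echo "$desired"         # 3 pods at 80% against a 50% target -> 5 replicas
```

This is why scaling does not jump straight to maxReplicas: the desired count is proportional to how far the observed utilization exceeds the target, then clamped to the min/max bounds.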
# Upload the metrics-server.tar image archive to /opt on every node
cd /opt/
docker load -i metrics-server.tar
# On the master node
kubectl apply -f components.yaml
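Once the manifest is applied, the deployment can be verified (assuming a working cluster) with:

```shell
# Confirm the metrics-server pod is running
kubectl get pods -n kube-system | grep metrics-server
# If the metrics API responds, these print per-node / per-pod CPU and memory usage
kubectl top node
kubectl top pod -A
```

If `kubectl top` returns data, HPA has the metrics source it needs.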
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-test
  labels:
    test: centos1
spec:
  replicas: 1
  selector:
    matchLabels:
      test: centos1
  template:
    metadata:
      labels:
        test: centos1
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum -y install epel-release; yum -y install stress; sleep 3600"]
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-centos7
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: centos-test
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      # autoscaling/v2beta2 uses target.type/averageUtilization
      # (targetAverageUtilization is v2beta1 syntax)
      target:
        type: Utilization
        averageUtilization: 50
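To watch the HPA react, CPU load can be generated inside the test pod using the stress tool installed above (the pod name here is a placeholder; substitute the actual name from `kubectl get pods`):

```shell
# Generate CPU load inside the test pod (replace the pod name with yours)
kubectl exec -it centos-test-xxxx -- stress -c 1
# In another terminal, watch the HPA raise the replica count
kubectl get hpa hpa-centos7 -w
kubectl get pods -w
```

When the load stops, the replica count falls back slowly, matching the cautious scale-in behavior described earlier.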
ResourceQuota: resource limits at the namespace level
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-test1
  namespace: test1
  labels:
    test: centos2
spec:
  replicas: 6
  selector:
    matchLabels:
      test: centos2
  template:
    metadata:
      labels:
        test: centos2
    spec:
      nodeSelector:
        kubernetes.io/hostname: node01
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum -y install epel-release; yum -y install stress; sleep 3600"]
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ns-resource
  namespace: test1
spec:
  hard:
    pods: "10"
    requests.memory: 1Gi
    limits.cpu: "4"
    limits.memory: 2Gi
    configmaps: "10"
    # at most 10 ConfigMaps can be created in this namespace
    persistentvolumeclaims: "4"
    # this namespace can use at most 4 PVCs
    secrets: "9"
    # at most 9 Secrets can be created
    services: "5"
    # at most 5 Services can be created
    services.nodeports: "2"
    # at most 2 NodePort-type Services can be created
With services.nodeports set to 2, only two NodePort Services can be created in this namespace; a third is rejected because the quota caps the count at two.
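Quota consumption against these limits can be inspected at any time (assuming the namespace and quota exist):

```shell
# Show used vs. hard limits for every resource covered by the quota
kubectl describe resourcequota ns-resource -n test1
```

A create request that would exceed any hard limit is rejected by the API server with an "exceeded quota" error, which is exactly what happens to the third NodePort Service above.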
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-test3
  namespace: test3
  labels:
    test: centos3
spec:
  replicas: 1
  selector:
    matchLabels:
      test: centos3
  template:
    metadata:
      labels:
        test: centos3
    spec:
      containers:
      - name: centos3
        image: centos:7
        command: ["/bin/bash", "-c", "yum -y install epel-release; yum -y install stress; sleep 3600"]
---
apiVersion: v1
kind: LimitRange
metadata:
  name: test3-limit
  namespace: test3
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: "1"
    defaultRequest:
      memory: 256Mi
      cpu: "0.5"
    type: Container
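Because the Deployment above declares no resources block, the LimitRange defaults are injected into its containers at admission time. This can be confirmed with (assuming both objects are applied):

```shell
kubectl describe limitrange test3-limit -n test3
# Inspect a created pod: the container should show the injected defaults
kubectl get pod -n test3 -o jsonpath='{.items[0].spec.containers[0].resources}'
```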
Ways to adjust the replica count:
1. Manually: kubectl scale deployment nginx1 --replicas=5, kubectl edit, or modify the YAML file and kubectl apply -f.
2. Automatically with HPA; the metric monitored here is CPU utilization (the autoscaling/v1 API does not consider memory).
Resource limits:
- pod resource limits
- namespace resource limits
Example: the lucky-cloud project is deployed in the test1 namespace. If neither lucky-cloud nor the namespace sets limits, it can still consume all of the cluster's resources.
Maximum number of pods a k8s cluster can deploy: 10,000.
busybox: a minimal stripped-down Linux image, about 4 MB, used for the smallest possible service container.
Services commonly deployed in k8s:
middleware such as Kafka: 6 replicas
Redis: 3 replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-test8
  labels:
    test: centos8
spec:
  replicas: 3
  selector:
    matchLabels:
      test: centos8
  template:
    metadata:
      labels:
        test: centos8
    spec:
      nodeSelector:
        kubernetes.io/hostname: node01
      containers:
      - name: centos8
        image: centos:7
        command: ["/bin/bash", "-c", "yum -y install epel-release; yum -y install stress; sleep 3600"]
        resources:
          limits:
            cpu: "2"
            memory: 512Mi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-centos7
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: centos-test8
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
Summary:
HPA handles automatic scale-out and scale-in.
Namespace limits come in two forms:
1. ResourceQuota: imposes resource quotas on a namespace.
2. LimitRange: declares default resource limits for the containers of pods created in the namespace; a uniform constraint that every pod in the namespace is subject to.
Pod resource limits are declared when the pod is created and should always be set:
resources:
  limits:
Namespace resource limits: a namespace's CPU and memory should always be capped, via ResourceQuota; the uniform per-pod defaults within a namespace come from LimitRange.
Core goal: prevent a single service or namespace from occupying the entire cluster's resources.
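A minimal sketch of the per-container resources block referred to above (the values are illustrative; requests should not exceed limits):

```yaml
resources:
  requests:          # scheduler guarantee: minimum reserved for the container
    cpu: "0.5"
    memory: 256Mi
  limits:            # hard cap enforced by the kubelet via cgroups
    cpu: "1"
    memory: 512Mi
```

HPA's CPU utilization percentage is computed against the request value, which is another reason pods targeted by an HPA must declare resources.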