How to Scale ReplicaSets in K8s

Published: January 9, 2024

Scaling ReplicaSets

ReplicaSets are scaled up or down by updating the spec.replicas key on the ReplicaSet object stored in Kubernetes. When a ReplicaSet is scaled up, new Pods are submitted to the Kubernetes API using the Pod template defined on the ReplicaSet.

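For reference, a minimal ReplicaSet manifest might look like the following sketch; the labels, image, and replica count here are illustrative assumptions rather than values taken from this article:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kuard
spec:
  replicas: 1        # adjust this field to scale the ReplicaSet up or down
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue   # illustrative image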

Imperative Scaling with kubectl scale

The easiest way to achieve this is using the scale command in kubectl. For example, to scale up to four replicas you could run:

kubectl scale replicasets kuard --replicas=4
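
To confirm the change took effect, you can inspect the ReplicaSet itself (rs is the built-in shorthand for replicasets); the DESIRED and CURRENT columns should both read 4 once the new Pods are running:

kubectl get rs kuard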

While such imperative commands are useful for demonstrations and quick reactions to emergency situations (e.g., in response to a sudden increase in load), it is important to also update any text-file configurations to match the number of replicas that you set via the imperative scale command. The reason for this becomes obvious when you consider the following scenario:

Alice is on call, when suddenly there is a large increase in load on the service she is managing. Alice uses the scale command to increase the number of servers responding to requests to 10, and the situation is resolved. However, Alice forgets to update the ReplicaSet configurations checked into source control. Several days later, Bob is preparing the weekly rollouts. Bob edits the ReplicaSet configurations stored in version control to use the new container image, but he doesn’t notice that the number of replicas in the file is currently 5, not the 10 that Alice set in response to the increased load. Bob proceeds with the rollout, which both updates the container image and reduces the number of replicas by half, causing an immediate overload or outage.

Hopefully, this illustrates the need to ensure that any imperative changes are immediately followed by a declarative change in source control. Indeed, if the need is not acute, we generally recommend only making declarative changes as described in the following section.

Declaratively Scaling with kubectl apply

In a declarative world, we make changes by editing the configuration file in version control and then applying those changes to our cluster. To scale the kuard ReplicaSet, edit the kuard-rs.yaml configuration file and set the replicas count to 3:

...
spec:
  replicas: 3
...

In a multiuser setting, you would like to have a documented code review of this change and eventually check the changes into version control. Either way, you can then use the kubectl apply command to submit the updated kuard ReplicaSet to the API server:

kubectl apply -f kuard-rs.yaml

Now that the updated kuard ReplicaSet is in place, the ReplicaSet controller will detect that the number of desired Pods has changed and that it needs to take action to realize that desired state. If you used the imperative scale command in the previous section, the ReplicaSet controller will destroy one Pod to get the number to three.

Otherwise, it will submit two new Pods to the Kubernetes API using the Pod template defined on the kuard ReplicaSet. Regardless, use the kubectl get pods command to list the running kuard Pods. You should see output like the following:

kubectl get pods
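
The exact Pod names and ages will differ in your cluster; illustrative output looks roughly like this:

NAME          READY   STATUS    RESTARTS   AGE
kuard-3a2pq   1/1     Running   0          26s
kuard-cd9fs   1/1     Running   0          26s
kuard-pwsfn   1/1     Running   0          2m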

Autoscaling a ReplicaSet

While there will be times when you want to have explicit control over the number of replicas in a ReplicaSet, often you simply want to have “enough” replicas. The definition varies depending on the needs of the containers in the ReplicaSet. For example, with a web server like NGINX, you may want to scale due to CPU usage. For an in-memory cache, you may want to scale with memory consumption. In some cases you may want to scale in response to custom application metrics. Kubernetes can handle all of these scenarios via Horizontal Pod Autoscaling (HPA).

HPA requires the presence of the heapster Pod on your cluster. heapster keeps track of metrics and provides an API for consuming metrics that HPA uses when making scaling decisions. Most installations of Kubernetes include heapster by default. (On newer Kubernetes versions, the metrics-server component has replaced heapster in this role.) You can validate its presence by listing the Pods in the kube-system namespace:

kubectl get pods --namespace=kube-system

You should see a Pod named heapster somewhere in that list. If you do not see it, autoscaling will not work correctly.

“Horizontal Pod Autoscaling” is kind of a mouthful, and you might wonder why it is not simply called “autoscaling.” Kubernetes makes a distinction between horizontal scaling, which involves creating additional replicas of a Pod, and vertical scaling, which involves increasing the resources required for a particular Pod (e.g., increasing the CPU required for the Pod). Vertical scaling is not currently implemented in Kubernetes, but it is planned. Additionally, many solutions also enable cluster autoscaling, where the number of machines in the cluster is scaled in response to resource needs, but this solution is not covered here.

"水平 Pod 自动扩展 "有点拗口,你可能会问,为什么不干脆叫 "自动扩展 "呢?Kubernetes 区分了水平扩展和垂直扩展,前者涉及为 Pod 创建额外的副本,后者涉及增加特定 Pod 所需的资源(例如,增加 Pod 所需的 CPU)。Kubernetes 目前尚未实现垂直扩展,但已在计划中。此外,许多解决方案还支持集群自动扩展,即根据资源需求扩展集群中的机器数量,但这里不涉及这一解决方案。

AUTOSCALING BASED ON CPU

Scaling based on CPU usage is the most common use case for Pod autoscaling. Generally it is most useful for request-based systems that consume CPU proportionally to the number of requests they are receiving, while using a relatively static amount of memory.

To scale a ReplicaSet, you can run a command like the following:

kubectl autoscale rs kuard --min=2 --max=5 --cpu-percent=80
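
If you prefer to keep the autoscaler itself under declarative management, an equivalent HorizontalPodAutoscaler manifest (a sketch using the autoscaling/v1 API) looks like this and can be submitted with kubectl apply:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: kuard
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: kuard
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80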

This command creates an autoscaler that scales between two and five replicas with a CPU threshold of 80%. To view, modify, or delete this resource you can use the standard kubectl commands and the horizontalpodautoscalers resource. horizontalpodautoscalers is quite a bit to type, but it can be shortened to hpa:

kubectl get hpa
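
The same resource responds to the other standard kubectl verbs; for example, to inspect the autoscaler's current state or to remove it:

kubectl describe hpa kuard
kubectl delete hpa kuard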

Because of the decoupled nature of Kubernetes, there is no direct link between the HPA and the ReplicaSet. While this is great for modularity and composition, it also enables some anti-patterns. In particular, it’s a bad idea to combine both autoscaling and imperative or declarative management of the number of replicas. If both you and an autoscaler are attempting to modify the number of replicas, it’s highly likely that you will clash, resulting in unexpected behavior.

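A commonly recommended way to avoid this clash (documented for Deployments in the Kubernetes HPA guide, and the same idea applies here) is to omit spec.replicas from the manifest you apply and let the autoscaler be the only writer of that field; a sketch of such a fragment:

...
spec:
  # replicas intentionally omitted; the HPA manages the replica count
  selector:
    matchLabels:
      app: kuard
...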

Source: https://blog.csdn.net/qq_37703224/article/details/135477615