Following on from https://www.wulaoer.org/?p=2359, this post mainly covers using CronHPA and HPA at the same time. Before doing that you need to install CronHPA; see: https://www.wulaoer.org/?p=2363
Scenario
Because the replica counts set by CronHPA's scheduled scaling were lower than expected, the workload ran short of resources, so an HPA was added on top. But when the two run together, as soon as CronHPA scales out on schedule, the HPA scales the replicas right back down, which is not what I want. The behavior I want is: after a scheduled scale-up the replica count stays put; if resources run short the HPA can still scale out based on utilization, but during the scheduled window it must never scale below the scheduled replica count. Look at the following example:
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: nginx-v1-crontab-hpa   # name of the scheduled scaling rule
  namespace: ops-tem           # namespace of the workload to scale
spec:
  scaleTargetRef:
    apiVersion: apps/v1        # note: Kubernetes 1.21 uses apps/v1; 1.18 used apps/v1beta2
    kind: Deployment
    name: nginx-v1             # name of the workload to scale
  jobs:
  - name: "Tuesday-up"         # scale-up job name
    schedule: "0 51 15 * * 3"  # scale-up time
    targetSize: 4              # replicas after scale-up
    runOnce: true
  - name: "Tuesday-down"       # scale-down job name
    schedule: "0 55 15 * * 3"  # scale-down time
    targetSize: 2              # replicas after scale-down
  - name: "Thursday-up"        # scale-up job name
    schedule: "0 30 23 * * 3"  # scale-up time
    targetSize: 3              # replicas after scale-up
    runOnce: true
  - name: "Thursday-down"
    schedule: "0 30 1 * * 6"   # scale-down time
    targetSize: 2              # replicas after scale-down
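One easy thing to trip over in the schedules above: kubernetes-cronhpa-controller uses a six-field cron expression with a leading seconds field, unlike the five-field crontab format. A minimal Python sketch (the function and field names are my own, for illustration) that labels the fields of a schedule string:

```python
# Decode a kubernetes-cronhpa-controller schedule string.
# The controller's cron expressions carry an extra leading "seconds"
# field, so "0 51 15 * * 3" means 15:51:00 on weekday 3, not 51 minutes
# past midnight as a five-field crontab reading would suggest.
FIELDS = ["second", "minute", "hour", "day-of-month", "month", "day-of-week"]

def decode_schedule(schedule: str) -> dict:
    """Map each value in a six-field cron expression to its field name."""
    values = schedule.split()
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return dict(zip(FIELDS, values))

print(decode_schedule("0 51 15 * * 3"))
```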
After this ran, the scale-up held for only a few minutes before the replicas were scaled back down. Let's look at the state at that moment:
[wolf@wulaoer.org🔥🔥🔥🔥 hpa]# kubectl describe cronhpa -n ops-tem nginx-v1-crontab-hpa
Name:         nginx-v1-crontab-hpa
Namespace:    ops-tem
Labels:       controller-tools.k8s.io=1.0
Annotations:  <none>
API Version:  autoscaling.alibabacloud.com/v1beta1
Kind:         CronHorizontalPodAutoscaler
Metadata:
  Creation Timestamp:  2022-06-01T06:52:35Z
  Generation:          13
  Managed Fields:
    API Version:  autoscaling.alibabacloud.com/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:labels:
          .:
          f:controller-tools.k8s.io:
      f:spec:
        .:
        f:scaleTargetRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-06-01T07:50:04Z
    API Version:  autoscaling.alibabacloud.com/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:excludeDates:
        f:jobs:
      f:status:
        .:
        f:conditions:
        f:excludeDates:
        f:scaleTargetRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
    Manager:         kubernetes-cronhpa-controller
    Operation:       Update
    Time:            2022-06-01T07:50:05Z
  Resource Version:  27323366
  UID:               792e3172-6180-487a-a949-7d2ef3f8628b
Spec:
  Exclude Dates:  <nil>
  Jobs:
    Name:         Tuesday-up
    Run Once:     true
    Schedule:     0 51 15 * * 3
    Target Size:  4
    Name:         Tuesday-down
    Run Once:     false
    Schedule:     0 55 15 * * 3
    Target Size:  2
    Name:         Thursday-up
    Run Once:     true
    Schedule:     0 30 23 * * 3
    Target Size:  3
    Name:         Thursday-down
    Run Once:     false
    Schedule:     0 30 1 * * 3
    Target Size:  2
  Scale Target Ref:
    API Version:  apps/v1
    Kind:         Deployment
    Name:         nginx-v1
Status:
  Conditions:
    Job Id:           27bf6565-40c3-4b36-ab43-d7fa4479c575
    Last Probe Time:  2022-06-01T07:50:05Z
    Message:
    Name:             Tuesday-down
    Run Once:         false
    Schedule:         0 55 15 * * 3
    State:            Submitted
    Target Size:      2
    Job Id:           7fe2b9be-341b-49b6-9159-b658b406a3fc
    Last Probe Time:  2022-06-01T07:50:05Z
    Message:
    Name:             Thursday-up
    Run Once:         true
    Schedule:         0 30 23 * * 3
    State:            Submitted
    Target Size:      3
    Job Id:           ee070282-f7a0-45cb-b147-b951e409eebb
    Last Probe Time:  2022-06-01T07:50:05Z
    Message:
    Name:             Thursday-down
    Run Once:         false
    Schedule:         0 30 1 * * 3
    State:            Submitted
    Target Size:      2
    Job Id:           3c3374b1-c87d-4c88-a60c-1651f85f352c
    Last Probe Time:  2022-06-01T07:51:02Z
    Message:          cron hpa job Tuesday-up executed successfully. current replicas:2, desired replicas:4.
    Name:             Tuesday-up
    Run Once:         true
    Schedule:         0 51 15 * * 3
    State:            Succeed
    Target Size:      4
  Exclude Dates:  <nil>
  Scale Target Ref:
    API Version:  apps/v1
    Kind:         Deployment
    Name:         nginx-v1
Events:
  Type     Reason   Age                From                            Message
  ----     ------   ----               ----                            -------
  Warning  Failed   50m                cron-horizontal-pod-autoscaler  cron hpa failed to execute,because of failed to scale (namespace: ops-tem;kind: Deployment;name: hpa-kbn4v) to 4 after retrying 11 times and exit,because of Failed to found source target hpa-kbn4v
  Normal   Succeed  31m                cron-horizontal-pod-autoscaler  cron hpa job Tuesday-down executed successfully. Skip scale replicas because HPA hpa-kbn4v current replicas:4 >= desired replicas:2.
  Normal   Succeed  25s (x3 over 41m)  cron-horizontal-pod-autoscaler  cron hpa job Tuesday-up executed successfully. current replicas:2, desired replicas:4.
Using CronHPA and HPA together
The events above show that the CronHPA job executed successfully, but because the HPA also acted on the Deployment, the replicas were scaled back down. We need to modify the CronHPA configuration file:
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: nginx-v1-crontab-hpa        # name of the scheduled scaling rule
  namespace: ops-tem                # namespace of the workload to scale
spec:
  scaleTargetRef:
    apiVersion: autoscaling/v2beta1 # API version of the HPA itself, not of the Deployment
    kind: HorizontalPodAutoscaler
    name: hpa-kbn4v                 # name of the HPA that manages the service
  jobs:
  - name: "Tuesday-up"              # scale-up job name
    schedule: "0 55 15 * * 3"       # scale-up time
    targetSize: 4                   # replicas after scale-up
    runOnce: true
  - name: "Tuesday-down"            # scale-down job name
    schedule: "0 00 16 * * 3"       # scale-down time
    targetSize: 2                   # replicas after scale-down
  - name: "Thursday-up"             # second scale-up job name
    schedule: "0 30 23 * * 3"       # scale-up time
    targetSize: 3                   # replicas after scale-up
    runOnce: true
  - name: "Thursday-down"
    schedule: "0 30 1 * * 3"        # scale-down time
    targetSize: 2                   # replicas after scale-down
The three fields under scaleTargetRef were changed here: the CronHPA now targets the HPA itself, so scheduled scaling happens on top of the HPA rather than fighting it. If resources run short the HPA will still scale out, so the HPA's maxReplicas must be greater than the replica counts the CronHPA jobs start. The name field is the HPA's name, which you can look up like this:
[wolf@wulaoer.org🔥🔥🔥🔥 hpa]# kubectl get hpa -n ops-tem
NAME        REFERENCE             TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
hpa-kbn4v   Deployment/nginx-v1   0%/65%, 0%/65%   2         10        2          93m
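For reference, the hpa-kbn4v object above could have been created from a manifest roughly like the following. This is a sketch reconstructed from the `kubectl get hpa` output (min 2, max 10, 65% CPU target); the exact metric configuration is an assumption, since the original HPA manifest is not shown in this post.

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-kbn4v
  namespace: ops-tem
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-v1
  minReplicas: 2
  maxReplicas: 10        # must stay above the CronHPA targetSize values
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 65
```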
HPA (min/max) | CronHPA target | Deployment replicas | Scaling result | Compatibility rule |
---|---|---|---|---|
1/10 | 5 | 5 | HPA (min/max): 1/10, Deployment: 5 | When the CronHPA target equals the current replicas, the HPA's min/max and the application's current replicas are left unchanged. |
1/10 | 4 | 5 | HPA (min/max): 1/10, Deployment: 5 | When the CronHPA target is below the current replicas, the current replicas are kept. |
1/10 | 6 | 5 | HPA (min/max): 6/10, Deployment: 6 | When the CronHPA target is above the current replicas, the CronHPA target is applied. If it is above the HPA's lower bound (minReplicas), minReplicas is raised. |
5/10 | 4 | 5 | HPA (min/max): 4/10, Deployment: 5 | When the CronHPA target is below the current replicas, the current replicas are kept. If the target is below the HPA's lower bound (minReplicas), minReplicas is lowered. |
5/10 | 11 | 5 | HPA (min/max): 11/11, Deployment: 11 | When the CronHPA target is above the current replicas, the CronHPA target is applied. If it is above the HPA's upper bound (maxReplicas), maxReplicas is raised. |
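The compatibility rules in the table can be sketched as a small function. This is my own illustration of the documented behavior, not the controller's actual code; the function name and signature are made up for the example.

```python
def reconcile(hpa_min: int, hpa_max: int, cron_target: int, current: int):
    """Return (new_hpa_min, new_hpa_max, new_replicas) when a CronHPA job
    whose scaleTargetRef is an HPA fires, per the compatibility table:
    - target <= current: keep the current replicas, lowering minReplicas
      only if the target sits below it;
    - target > current: scale to the target, raising minReplicas (and
      maxReplicas if necessary) so the HPA cannot immediately undo it."""
    if cron_target <= current:
        # Scheduled scale-down never shrinks a workload the HPA has
        # already scaled out past the target (table rows 1, 2 and 4).
        return min(hpa_min, cron_target), hpa_max, current
    # Scheduled scale-up wins, and the HPA bounds stretch to admit the
    # new replica count (table rows 3 and 5).
    return max(hpa_min, cron_target), max(hpa_max, cron_target), cron_target

print(reconcile(5, 10, 11, 5))
```

Each row of the table maps to one call: for example, `reconcile(1, 10, 6, 5)` reproduces the third row, where minReplicas is raised to 6 and the Deployment scales to 6.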