Updated on 2025-05-07 GMT+08:00

Creating a Node Scaling Policy

CCE provides auto scaling through the CCE Cluster Autoscaler add-on. Nodes with different flavors can be automatically added across AZs on demand.

If both a node scaling policy and the auto scaling configuration of the Autoscaler add-on take effect at the same time, for example, there are unschedulable pods and a metric value exceeds the threshold, a scale-out is performed for the unschedulable pods first.

  • If the scale-out succeeds for the unschedulable pods, the system skips the metric-based rule logic and enters the next loop.
  • If the scale-out fails for the unschedulable pods, the metric-based rule is executed.

Prerequisites

Before using the node scaling function, you must install the CCE Cluster Autoscaler add-on of v1.13.8 or later in the cluster.

To use node flavor priorities, the Autoscaler version must be 1.19.35, 1.21.28, 1.23.30, 1.25.20, or later. To balance load among AZs, the version must be 1.23.122, 1.25.117, 1.27.85, 1.28.52, or later.

Notes and Constraints

  • If there are no nodes in a node pool, Autoscaler cannot obtain the CPU or memory data of the nodes, and node scaling rules triggered using these metrics will not take effect.
  • If the driver of a GPU or NPU node is not installed, Autoscaler determines that the node is not fully available, and node scaling rules triggered using the CPU or memory metrics will not take effect.
  • When CCE Cluster Autoscaler is used, some taints or annotations may affect auto scaling. Therefore, do not use the following taints or annotations in clusters:
    • ignore-taint.cluster-autoscaler.kubernetes.io: This taint works on nodes. The Kubernetes-native Autoscaler protects against abnormal scale-outs by periodically evaluating the proportion of available nodes in the cluster; protection is triggered when the proportion of non-ready nodes exceeds 45%. In this case, all nodes with the ignore-taint.cluster-autoscaler.kubernetes.io taint are filtered out of the Autoscaler template and recorded as non-ready nodes, which affects cluster scaling.
    • cluster-autoscaler.kubernetes.io/enable-ds-eviction: This annotation works on pods and determines whether DaemonSet pods can be evicted by Autoscaler. For details, see Well-Known Labels, Annotations and Taints.
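
For reference, the following is a minimal sketch showing where this annotation sits on a DaemonSet pod template (the workload name and image are hypothetical); per the constraint above, avoid setting it in CCE clusters:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds                  # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-ds
  template:
    metadata:
      labels:
        app: example-ds
      annotations:
        # Controls whether Autoscaler may evict this DaemonSet's pods;
        # the constraint above advises against using it in CCE clusters.
        cluster-autoscaler.kubernetes.io/enable-ds-eviction: "true"
    spec:
      containers:
      - name: app
        image: nginx:alpine         # hypothetical image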

Configuring Node Pool Scaling Policies

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Nodes. On the Node Pools tab, locate the row containing the target node pool and click Auto Scaling.

    • If CCE Cluster Autoscaler is not installed, configure add-on parameters based on service requirements, click Install, and wait until the add-on is installed. For details about add-on configurations, see CCE Cluster Autoscaler.
    • If CCE Cluster Autoscaler has been installed, configure scaling policies.

  3. Configure auto scaling policies.

    AS Configuration

    • Customized Rule: Click Add Rule. In the dialog box displayed, configure the parameters. You can add multiple node scaling rules, including a maximum of one CPU usage-based rule and one memory usage-based rule. The total number of rules cannot exceed 10.
      The following table lists custom rules.
      Table 1 Custom rules

      Metric-based rule

      • Trigger: Select CPU allocation rate or Memory allocation rate and enter a percentage. The percentage must be greater than the scale-in threshold specified in the node scale-in conditions of the cluster's auto scaling policy (see Configuring an Auto Scaling Policy for a Cluster).
        NOTE:
        • Resource allocation (%) = Resources requested by pods in the node pool/Resources allocatable to pods in the node pool
        • If multiple rules meet the conditions, they are executed in either of the following ways:

          If rules based on the CPU allocation rate and memory allocation rate are both configured and two or more rules meet the scale-out conditions, the rule that adds the most nodes is executed.

          If a rule based on the CPU allocation rate and a periodic rule both meet the scale-out conditions, whichever is executed first changes the node pool to the scaling state, preventing the other from being executed. If the periodic rule is executed first, the metric-based rule is not executed, even after the node pool status returns to normal. If the metric-based rule is executed first, the periodic rule is executed after the metric-based rule finishes.

        • If rules based on the CPU and memory allocation rates are configured, the policy detection period varies with the processing logic of each loop of the Autoscaler add-on. A scale-out is triggered once the conditions are met, but it is constrained by other factors such as the cooldown period and node pool status.
        • If the number of nodes reaches the upper limit of the cluster scale, the maximum number of nodes supported in a node pool, or the maximum number of nodes of a specific flavor, a metric-based scale-out is not triggered.
        • If the number of nodes, CPUs, or memory resources reaches the upper limit configured for a node scale-out, a metric-based scale-out is not triggered.
      • Action: Configure an action to be performed when the triggering condition is met.
        • Custom: Add a specified number of nodes to the node pool.
        • Auto calculation: When the trigger condition is met, nodes are automatically added until the allocation rate drops below the threshold (see the worked example after this list). The formula is as follows:

          Number of nodes to be added = [Resource request of pods in the node pool/(Available resources of a single node x Target allocation rate)] – Number of current nodes + 1

      Periodic rule

      • Trigger Time: You can select a specific time every day, every week, every month, or every year.
      • Action: specifies an action to be carried out when the trigger time is reached. A specified number of nodes will be added to the node pool.
    • Nodes: the allowed range for the number of nodes in the node pool. During auto scaling, the number of nodes always stays within this range.
    • Cooldown Period: a period during which the nodes added in the current node pool cannot be scaled in.
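
    For example (hypothetical figures, assuming the bracketed term in the Auto calculation formula is rounded down): if the pods in a node pool request a total of 21 vCPUs, each node provides 8 allocatable vCPUs, the target CPU allocation rate is 70%, and the pool currently has 3 nodes, then:

    Number of nodes to be added = [21/(8 x 0.7)] – 3 + 1 = [3.75] – 3 + 1 = 1

    After one node is added, the allocation rate becomes 21/(4 x 8) ≈ 66%, which is below the 70% target.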

    AS Object

    • Specifications: Configure whether to enable auto scaling for node flavors in a node pool.

      If multiple flavors are configured for a node pool, you can specify the upper limit for the number of nodes and the priority for each flavor separately.

  4. View cluster-level auto scaling configurations, which take effect for all node pools in the cluster. These configurations can only be viewed on this page; to modify them, go to the Settings page. For details, see Configuring an Auto Scaling Policy for a Cluster.
  5. Click OK.

Configuring an Auto Scaling Policy for a Cluster

An auto scaling policy applies to all node pools in a cluster. After the policy is modified, the Autoscaler add-on will be restarted.

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Settings and click the Auto Scaling tab.

    • If CCE Cluster Autoscaler is not installed, configure add-on parameters based on service requirements, click Install, and wait until the add-on is installed. For details about add-on configurations, see CCE Cluster Autoscaler.
    • If CCE Cluster Autoscaler has been installed, configure scaling policies.

  3. Configure auto scale-out.

    • Auto Scale-out when the load cannot be scheduled: When workload pods in a cluster cannot be scheduled (that is, they remain in the Pending state), CCE automatically adds nodes to the cluster's node pools. If a pod has already been scheduled to a node, the node is not involved in an automatic scale-out. Such auto scaling typically works with an HPA policy. For details, see Using HPA and CA for Auto Scaling of Workloads and Nodes.

      If this function is not enabled, custom scaling rules are the only option for performing a scale-out.

    • Upper limit of resources to be expanded: the upper limit for the cluster's resources, such as the number of nodes, CPU cores, and memory. Once this limit is reached, no new nodes will be automatically added.
    • Scale-Out Priority: You can drag and drop the node pools in a list to adjust their scale-out priorities.

  4. Configure auto scale-in. Auto scale-in is disabled by default. After it is enabled, you can configure Node Scale-In Conditions and Node Scale-In Policy. If the nodes in the cluster meet the scale-in conditions, they are removed automatically.

    Node Scale-In Conditions

    Table 2 Node scale-in conditions

    Default Scale-In Conditions

    If the CPU and memory allocation rates of a node are lower than a certain percentage (50% by default) for a period of time (10 minutes by default), or the node is unavailable for a period of time (20 minutes by default), the node will be scaled in.

    Allocation rate = Total requested resources of all pods/Allocatable resources on the node

    If the option Ignore the pre-allocated CPU and memory of the DaemonSet container is selected, CCE will not consider the CPU and memory resources pre-allocated to DaemonSet pods when determining whether to scale in cluster nodes. This means that the resources used by DaemonSet pods will not affect the scaling-in decision. If this option is not selected, the resources pre-allocated to DaemonSet pods will be included in the resource allocation calculations. This can cause the CPU and memory allocation rates to exceed the node scale-in threshold, potentially preventing nodes with low CPU and memory utilization from being scaled in.

    Scale-in Exception Scenarios

    CCE does not scale in a node, even if the node resources or status meets the scale-in conditions, in any of the following scenarios:
    • Resources on other nodes in the cluster are insufficient.
    • Scale-in protection is enabled on the node. To enable or disable node scale-in protection, choose Nodes in the navigation pane and then click the Nodes tab. Locate the target node, choose More in the Operation column, and then enable or disable node scale-in protection.
    • There is a pod with the non-scale label on the node.
    • Policies such as reliability have been configured on some containers on the node.
    • There are non-DaemonSet containers in the kube-system namespace on the node.
    • (Optional) A container managed by a third-party pod controller is running on the node. Third-party pod controllers manage custom workloads other than Kubernetes-native workloads such as Deployments and StatefulSets. Such controllers can be created using CustomResourceDefinitions.
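
    To illustrate the default scale-in conditions above (hypothetical figures): on a node with 8 allocatable vCPUs, pods requesting a total of 3.2 vCPUs yield a CPU allocation rate of 3.2/8 = 40%. If both the CPU and memory allocation rates stay below the default 50% threshold for 10 minutes, the node becomes a scale-in candidate, subject to the exception scenarios above.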
    Node Scale-In Policy

    Table 3 Node scale-in policy configurations

    Number of Concurrent Scale-In Requests (default: 10)

    Maximum number of idle nodes that can be deleted concurrently. Only idle nodes can be scaled in concurrently; nodes that are not idle can only be scaled in one at a time.

    NOTE:
    During a node scale-in, a node is considered idle if none of its pods need to be evicted (for example, it runs only DaemonSet pods). Otherwise, the node is not idle.

    Node Recheck Timeout (default: 5 minutes)

    Interval at which a node is checked again after it has been determined that the node cannot be scaled in.

    Cooldown Time

    • Cooldown period for starting scale-in evaluation again after auto scale-in is triggered in a cluster (default: 10 minutes)

      NOTE:
      If both auto scale-out and scale-in exist in a cluster, set this parameter to 0 minutes. This prevents node scale-ins from being blocked by continuous scale-outs of some node pools or by retries upon a scale-out failure, which would waste node resources.

    • Cooldown period for starting scale-in evaluation again after auto scale-out is triggered in a cluster (default: 10 minutes)
    • Cooldown period for starting scale-in evaluation again after an auto scale-in triggered in a cluster fails (default: 3 minutes)

  5. Click Confirm configuration.

Cooldown Period

The impacts of and relationship between the two kinds of cooldown periods, configured for a node pool and for the cluster, are as follows:

Cooldown Period During a Scale-out

This interval indicates the period during which nodes added to the current node pool after a scale-out cannot be deleted. This setting takes effect in the entire node pool.

Cooldown Period During a Scale-in

The interval after a scale-out indicates the period during which the cluster cannot be scaled in after the Autoscaler add-on triggers a scale-out (due to unschedulable pods, metrics, or scaling policies). This interval applies to the entire cluster.

The interval after a node is deleted indicates the period during which the cluster cannot be scaled in after the Autoscaler add-on triggers a scale-in. This interval applies to the entire cluster.

The interval after a failed scale-in indicates the period during which the cluster cannot be scaled in after a scale-in triggered by the Autoscaler add-on fails. This interval applies to the entire cluster.
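
For reference, a sketch of how these three intervals appear to map to the interval flags of the community Cluster Autoscaler, on which the CCE add-on is based (an assumption; the flags the add-on actually exposes, and their values, may differ):

--scale-down-delay-after-add=10m        # interval after a scale-out
--scale-down-delay-after-delete=10m     # interval after a node is deleted
--scale-down-delay-after-failure=3m     # interval after a failed scale-in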

Period for Autoscaler to Retry a Scale-out

If a node pool fails to scale out, for example, because resources are insufficient, the quota is used up, or an error occurred during node installation, Autoscaler can retry the scale-out in that node pool or switch to another node pool. The retry period varies depending on the failure cause:

  • When resources in a node pool are sold out or the user quota is insufficient, Autoscaler cools down the node pool for 5, 10, or 20 minutes, up to a maximum of 30 minutes. Within the next 10 seconds, Autoscaler switches to another node pool for a scale-out, repeating until the expected nodes are added or all node pools are in the cooldown period.
  • If an error occurs during node installation in a node pool, the node pool enters a 5-minute cooldown period. After the period expires, Autoscaler can trigger a scale-out in the node pool again. If the faulty node is automatically reclaimed, Autoscaler re-evaluates the cluster status within 1 minute and triggers a node pool scale-out as needed.
  • During a node pool scale-out, if a node remains in the installing state for a long time, Autoscaler tolerates the node for a maximum of 15 minutes. After the tolerance period expires, Autoscaler re-evaluates the cluster status and triggers a node pool scale-out as needed.

Example YAML

The following is a YAML example of a node scaling policy:

apiVersion: autoscaling.cce.io/v1alpha1
kind: HorizontalNodeAutoscaler
metadata:
  name: xxxx
  namespace: kube-system
spec:
  disable: false                 # The policy is enabled.
  rules:
  - action:
      type: ScaleUp              # Add one node...
      unit: Node
      value: 1
    cronTrigger:
      schedule: 47 20 * * *      # ...at 20:47 every day.
    disable: false
    ruleName: cronrule
    type: Cron                   # Periodic rule
  - action:
      type: ScaleUp              # Add two nodes...
      unit: Node
      value: 2
    disable: false
    metricTrigger:
      metricName: Cpu            # ...when the CPU allocation rate...
      metricOperation: '>'
      metricValue: "40"          # ...exceeds 40%.
      unit: Percent
    ruleName: metricrule
    type: Metric                 # Metric-based rule
  targetNodepoolIds:             # Node pools this policy applies to
  - 7d48eca7-3419-11ea-bc29-0255ac1001a8
Table 4 Key parameters

• spec.disable (Bool): Whether to disable the scaling policy (false means the policy is enabled). This parameter takes effect for all rules in the policy.
• spec.rules (Array): All rules in the scaling policy.
• spec.rules[x].ruleName (String): Rule name.
• spec.rules[x].type (String): Rule type. Cron and Metric are supported.
• spec.rules[x].disable (Bool): Rule switch. Currently, only false is supported.
• spec.rules[x].action.type (String): Rule action type. Currently, only ScaleUp is supported.
• spec.rules[x].action.unit (String): Rule action unit. Currently, only Node is supported.
• spec.rules[x].action.value (Integer): Rule action value, that is, the number of nodes to add.
• spec.rules[x].cronTrigger: Optional. Valid only in periodic (Cron) rules.
• spec.rules[x].cronTrigger.schedule (String): Cron expression of a periodic rule.
• spec.rules[x].metricTrigger: Optional. Valid only in metric-based rules.
• spec.rules[x].metricTrigger.metricName (String): Metric of a metric-based rule. Currently, Cpu and Memory are supported.
• spec.rules[x].metricTrigger.metricOperation (String): Comparison operator of a metric-based rule. Currently, only > is supported.
• spec.rules[x].metricTrigger.metricValue (String): Threshold of a metric-based rule. The value must be an integer from 1 to 100, provided as a string. If it is set to -1, the threshold is calculated automatically.
• spec.rules[x].metricTrigger.unit (String): Unit of the metric-based rule threshold. Currently, only Percent is supported.
• spec.targetNodepoolIds (Array): All node pools associated with the scaling policy.
• spec.targetNodepoolIds[x] (String): UID of a node pool associated with the scaling policy.
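
Assuming the policy manifest is saved as policy.yaml and that the HorizontalNodeAutoscaler resource can be managed directly with kubectl (an assumption; the console is the documented path), the policy could be created and verified as follows:

kubectl apply -f policy.yaml    # create or update the scaling policy
kubectl get -f policy.yaml      # confirm that the resource exists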