Pam Andrejko, Barry Mosakowski

When running an OpenShift cluster, you sometimes need to update your nodes, for example when you upgrade to a new version of OCP, apply a change with an oc patch image.config.openshift.io/cluster command, or apply an ImageContentSourcePolicy. Changes like these often require all the nodes in the cluster to be restarted.

The Problem

Occasionally, one of the nodes fails to restart. This is problematic because the updates need to be pushed to every node, and every node in the cluster needs to be Schedulable and Ready.

How do you know if you have this problem? Run the command:

    oc get machineconfigpool

Monitor the MACHINECOUNT column, which shows the total number of nodes in each pool, and compare it to the values in the READYMACHINECOUNT column:

    NAME     CONFIG                 MACHINECOUNT   READYMACHINECOUNT
    master   rendered-master-xxxx   3              0
    worker   rendered-worker-yyyy   6
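If READYMACHINECOUNT lags behind MACHINECOUNT, a quick way to narrow down which node is stuck is to list the nodes and inspect the one that is not Ready. Here is a minimal sketch using standard oc commands; the <node-name> placeholder stands for whichever node shows up as NotReady or SchedulingDisabled in your own output:

```sh
# Watch the pools: READYMACHINECOUNT should eventually equal MACHINECOUNT
oc get machineconfigpool

# List all nodes; a node stuck in NotReady or SchedulingDisabled
# is usually the one blocking the rollout
oc get nodes

# Inspect the suspect node's conditions and recent events
oc describe node <node-name>
```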
What and why is OpenShift CPU throttling? Turbonomic to the rescue!

Problem and Terminology

If you've used Turbonomic to optimize your cluster resources, you've seen it flag certain containers as being throttled. What exactly does that mean, and why is it so important to address?

In Kubernetes, pod CPU requirements are defined in the pod specification by setting CPU requests and limits. The CPU request is the baseline amount of CPU allocated to the pod, and the CPU limit is how high the CPU allocation can scale if the pod needs it. You define CPU requests and limits in millicores (m), where 1000m is one core. Thus, 1000m = 1 core = 1 vCPU.

OpenShift, a Kubernetes-based container platform, uses the Kubernetes concept of CPU throttling to enforce the CPU limit. The key to understanding throttling is that, by default, Kubernetes enforces the limit over a time period (100ms) of CPU rather than against the available CPU power. So even t
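As a concrete illustration of requests and limits expressed in millicores, here is a minimal sketch of a pod spec; the pod name, container name, and image are placeholders, not part of the original article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo                 # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        cpu: "250m"              # baseline: a quarter of one core
      limits:
        cpu: "1000m"             # ceiling: 1000m = 1 core = 1 vCPU
```

With the default Linux CFS settings that Kubernetes uses, a 1000m limit translates to a quota of 100ms of CPU time per 100ms scheduling period; once a container exhausts its quota within a period, it is throttled until the next period begins, regardless of how much idle CPU the node still has.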